I0504 11:05:46.918523 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0504 11:05:46.918694 7 e2e.go:124] Starting e2e run "c3d571a7-3318-49f9-9e98-d2363c01e166" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1588590345 - Will randomize all specs
Will run 275 of 4992 specs
May 4 11:05:46.983: INFO: >>> kubeConfig: /root/.kube/config
May 4 11:05:46.987: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 4 11:05:47.017: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 4 11:05:47.061: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 4 11:05:47.061: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 4 11:05:47.061: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 4 11:05:47.072: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 4 11:05:47.072: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 4 11:05:47.072: INFO: e2e test version: v1.18.2
May 4 11:05:47.073: INFO: kube-apiserver version: v1.18.2
May 4 11:05:47.073: INFO: >>> kubeConfig: /root/.kube/config
May 4 11:05:47.077: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 4 11:05:47.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
May 4 11:05:47.151: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 11:05:47.637: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 11:05:49.647: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187147, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187147, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187147, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187147, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:05:51.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187147, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187147, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187147, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187147, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 11:05:54.683: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:05:54.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 4 11:05:55.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1656" for this suite.
STEP: Destroying namespace "webhook-1656-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:8.992 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":1,"skipped":15,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 4 11:05:56.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-71322426-ec18-4b77-9c56-06241f2df305 in namespace container-probe-9127
May 4 11:06:00.158: INFO: Started pod busybox-71322426-ec18-4b77-9c56-06241f2df305 in namespace container-probe-9127
STEP: checking the pod's current state and verifying that restartCount is present
May 4 11:06:00.160: INFO: Initial restart count of pod busybox-71322426-ec18-4b77-9c56-06241f2df305 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 4 11:10:01.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9127" for this suite.
• [SLOW TEST:244.962 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":2,"skipped":66,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May 4 11:10:01.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May 4 11:10:01.106: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Pending, waiting for it to be Running (with Ready = true)
May 4 11:10:03.110: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Pending, waiting for it to be Running (with Ready = true)
May 4 11:10:05.110: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Running (Ready = false)
May 4 11:10:07.110: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Running (Ready = false)
May 4 11:10:09.110: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Running (Ready = false)
May 4 11:10:11.111: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Running (Ready = false)
May 4 11:10:13.110: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Running (Ready = false)
May 4 11:10:15.110: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Running (Ready = false)
May 4 11:10:17.110: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Running (Ready = false)
May 4 11:10:19.110: INFO: The status of Pod test-webserver-8be13cf9-a126-4e1b-8f3c-339fd6e0d70c is Running (Ready = true)
May 4 11:10:19.112: INFO: Container started at 2020-05-04 11:10:03 +0000 UTC, pod became ready at 2020-05-04 11:10:19 +0000 UTC
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May 4 11:10:19.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9547" for this suite.
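The two probe cases above come down to a pod spec with an exec "cat /tmp/health" liveness probe that must never trigger a restart, and a readiness probe that must not report Ready before its initial delay. A minimal sketch of such a pod follows; the pod name and probe timings are illustrative assumptions (the e2e test's actual spec is not shown in this log), and the only grounded figure is the roughly 16 seconds the log reports between container start and Ready=true:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: probe-demo                      # hypothetical name, not the e2e test's pod
spec:
  containers:
  - name: busybox
    image: busybox
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:                      # exec "cat /tmp/health" check, as in the first test above
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5            # assumed value for illustration
      periodSeconds: 5                  # assumed value for illustration
    readinessProbe:                     # readiness gated behind an initial delay, as in the second test
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15           # assumed; the log shows ~16s from container start to Ready=true
EOF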
• [SLOW TEST:18.086 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":73,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:10:19.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-733 STEP: creating replication controller nodeport-test in namespace services-733 I0504 11:10:19.230837 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-733, replica count: 2 I0504 11:10:22.281538 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 11:10:25.281764 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 11:10:25.281: INFO: Creating new exec pod May 4 11:10:30.315: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-733 execpodmfgcw -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 4 11:10:33.381: INFO: stderr: "I0504 11:10:33.276652 34 log.go:172] (0xc00003b550) (0xc000645900) Create stream\nI0504 11:10:33.276716 34 log.go:172] (0xc00003b550) (0xc000645900) Stream added, broadcasting: 1\nI0504 11:10:33.280055 34 log.go:172] (0xc00003b550) Reply frame received for 1\nI0504 11:10:33.280094 34 log.go:172] (0xc00003b550) (0xc0006459a0) Create stream\nI0504 11:10:33.280119 34 log.go:172] (0xc00003b550) (0xc0006459a0) Stream added, broadcasting: 3\nI0504 11:10:33.281095 34 log.go:172] (0xc00003b550) Reply frame received for 3\nI0504 11:10:33.281337 34 log.go:172] (0xc00003b550) (0xc000310500) Create stream\nI0504 11:10:33.281358 34 log.go:172] (0xc00003b550) (0xc000310500) Stream added, broadcasting: 5\nI0504 11:10:33.282466 34 log.go:172] (0xc00003b550) Reply frame received for 5\nI0504 11:10:33.373850 34 log.go:172] (0xc00003b550) Data frame received for 5\nI0504 11:10:33.373878 34 log.go:172] (0xc000310500) (5) Data frame handling\nI0504 11:10:33.373924 34 log.go:172] (0xc000310500) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0504 11:10:33.374393 34 log.go:172] (0xc00003b550) Data frame 
received for 5\nI0504 11:10:33.374415 34 log.go:172] (0xc000310500) (5) Data frame handling\nI0504 11:10:33.374454 34 log.go:172] (0xc000310500) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0504 11:10:33.374676 34 log.go:172] (0xc00003b550) Data frame received for 5\nI0504 11:10:33.374696 34 log.go:172] (0xc000310500) (5) Data frame handling\nI0504 11:10:33.374932 34 log.go:172] (0xc00003b550) Data frame received for 3\nI0504 11:10:33.374964 34 log.go:172] (0xc0006459a0) (3) Data frame handling\nI0504 11:10:33.376446 34 log.go:172] (0xc00003b550) Data frame received for 1\nI0504 11:10:33.376462 34 log.go:172] (0xc000645900) (1) Data frame handling\nI0504 11:10:33.376477 34 log.go:172] (0xc000645900) (1) Data frame sent\nI0504 11:10:33.376489 34 log.go:172] (0xc00003b550) (0xc000645900) Stream removed, broadcasting: 1\nI0504 11:10:33.376513 34 log.go:172] (0xc00003b550) Go away received\nI0504 11:10:33.376840 34 log.go:172] (0xc00003b550) (0xc000645900) Stream removed, broadcasting: 1\nI0504 11:10:33.376864 34 log.go:172] (0xc00003b550) (0xc0006459a0) Stream removed, broadcasting: 3\nI0504 11:10:33.376882 34 log.go:172] (0xc00003b550) (0xc000310500) Stream removed, broadcasting: 5\n" May 4 11:10:33.381: INFO: stdout: "" May 4 11:10:33.382: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-733 execpodmfgcw -- /bin/sh -x -c nc -zv -t -w 2 10.99.17.84 80' May 4 11:10:33.602: INFO: stderr: "I0504 11:10:33.511939 62 log.go:172] (0xc0009de790) (0xc000a66320) Create stream\nI0504 11:10:33.512006 62 log.go:172] (0xc0009de790) (0xc000a66320) Stream added, broadcasting: 1\nI0504 11:10:33.515117 62 log.go:172] (0xc0009de790) Reply frame received for 1\nI0504 11:10:33.515154 62 log.go:172] (0xc0009de790) (0xc0006992c0) Create stream\nI0504 11:10:33.515168 62 log.go:172] (0xc0009de790) (0xc0006992c0) Stream added, broadcasting: 3\nI0504 11:10:33.516124 62 log.go:172] (0xc0009de790) Reply frame received for 3\nI0504 11:10:33.516153 62 log.go:172] (0xc0009de790) (0xc000563680) Create stream\nI0504 11:10:33.516166 62 log.go:172] (0xc0009de790) (0xc000563680) Stream added, broadcasting: 5\nI0504 11:10:33.517315 62 log.go:172] (0xc0009de790) Reply frame received for 5\nI0504 11:10:33.592861 62 log.go:172] (0xc0009de790) Data frame received for 3\nI0504 11:10:33.592896 62 log.go:172] (0xc0006992c0) (3) Data frame handling\nI0504 11:10:33.592934 62 log.go:172] (0xc0009de790) Data frame received for 5\nI0504 11:10:33.592969 62 log.go:172] (0xc000563680) (5) Data frame handling\nI0504 11:10:33.592983 62 log.go:172] (0xc000563680) (5) Data frame sent\nI0504 11:10:33.592994 62 log.go:172] (0xc0009de790) Data frame received for 5\nI0504 11:10:33.593004 62 log.go:172] (0xc000563680) (5) Data frame handling\n+ nc -zv -t -w 2 10.99.17.84 80\nConnection to 10.99.17.84 80 port [tcp/http] succeeded!\nI0504 11:10:33.595515 62 log.go:172] (0xc0009de790) Data frame received for 1\nI0504 11:10:33.595553 62 log.go:172] (0xc000a66320) (1) Data frame handling\nI0504 11:10:33.595590 62 log.go:172] (0xc000a66320) (1) Data frame sent\nI0504 11:10:33.595745 62 log.go:172] (0xc0009de790) (0xc000a66320) Stream removed, broadcasting: 1\nI0504 11:10:33.595780 62 log.go:172] (0xc0009de790) Go away received\nI0504 11:10:33.596278 62 log.go:172] (0xc0009de790) (0xc000a66320) Stream removed, broadcasting: 1\nI0504 11:10:33.596308 62 log.go:172] (0xc0009de790) (0xc0006992c0) Stream removed, broadcasting: 3\nI0504 
11:10:33.596328 62 log.go:172] (0xc0009de790) (0xc000563680) Stream removed, broadcasting: 5\n" May 4 11:10:33.602: INFO: stdout: "" May 4 11:10:33.602: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-733 execpodmfgcw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 31422' May 4 11:10:33.818: INFO: stderr: "I0504 11:10:33.734530 84 log.go:172] (0xc000a891e0) (0xc000a6c500) Create stream\nI0504 11:10:33.734600 84 log.go:172] (0xc000a891e0) (0xc000a6c500) Stream added, broadcasting: 1\nI0504 11:10:33.737051 84 log.go:172] (0xc000a891e0) Reply frame received for 1\nI0504 11:10:33.737080 84 log.go:172] (0xc000a891e0) (0xc000ae03c0) Create stream\nI0504 11:10:33.737090 84 log.go:172] (0xc000a891e0) (0xc000ae03c0) Stream added, broadcasting: 3\nI0504 11:10:33.738088 84 log.go:172] (0xc000a891e0) Reply frame received for 3\nI0504 11:10:33.738110 84 log.go:172] (0xc000a891e0) (0xc000c5a280) Create stream\nI0504 11:10:33.738117 84 log.go:172] (0xc000a891e0) (0xc000c5a280) Stream added, broadcasting: 5\nI0504 11:10:33.738823 84 log.go:172] (0xc000a891e0) Reply frame received for 5\nI0504 11:10:33.808218 84 log.go:172] (0xc000a891e0) Data frame received for 3\nI0504 11:10:33.808250 84 log.go:172] (0xc000ae03c0) (3) Data frame handling\nI0504 11:10:33.808817 84 log.go:172] (0xc000a891e0) Data frame received for 5\nI0504 11:10:33.808831 84 log.go:172] (0xc000c5a280) (5) Data frame handling\nI0504 11:10:33.808841 84 log.go:172] (0xc000c5a280) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.15 31422\nConnection to 172.17.0.15 31422 port [tcp/31422] succeeded!\nI0504 11:10:33.809100 84 log.go:172] (0xc000a891e0) Data frame received for 5\nI0504 11:10:33.809260 84 log.go:172] (0xc000c5a280) (5) Data frame handling\nI0504 11:10:33.813546 84 log.go:172] (0xc000a891e0) Data frame received for 1\nI0504 11:10:33.813575 84 log.go:172] (0xc000a6c500) (1) Data frame handling\nI0504 11:10:33.813600 84 log.go:172] (0xc000a6c500) (1) Data frame sent\nI0504 11:10:33.813687 84 log.go:172] (0xc000a891e0) (0xc000a6c500) Stream removed, broadcasting: 1\nI0504 11:10:33.813718 84 log.go:172] (0xc000a891e0) Go away received\nI0504 11:10:33.814000 84 log.go:172] (0xc000a891e0) (0xc000a6c500) Stream removed, broadcasting: 1\nI0504 11:10:33.814014 84 log.go:172] (0xc000a891e0) (0xc000ae03c0) Stream removed, broadcasting: 3\nI0504 11:10:33.814023 84 log.go:172] (0xc000a891e0) (0xc000c5a280) Stream removed, broadcasting: 5\n" May 4 11:10:33.818: INFO: stdout: "" May 4 11:10:33.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-733 execpodmfgcw -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 31422' May 4 11:10:34.020: INFO: stderr: "I0504 11:10:33.942228 106 log.go:172] (0xc000bbf970) (0xc000962be0) Create stream\nI0504 11:10:33.942297 106 log.go:172] (0xc000bbf970) (0xc000962be0) Stream added, broadcasting: 1\nI0504 11:10:33.947297 106 log.go:172] (0xc000bbf970) Reply frame received for 1\nI0504 11:10:33.947363 106 log.go:172] (0xc000bbf970) (0xc0005415e0) Create stream\nI0504 11:10:33.947398 106 log.go:172] (0xc000bbf970) (0xc0005415e0) Stream added, broadcasting: 3\nI0504 11:10:33.948643 106 log.go:172] (0xc000bbf970) Reply frame received for 3\nI0504 11:10:33.948677 106 log.go:172] (0xc000bbf970) (0xc0003eaa00) Create stream\nI0504 11:10:33.948686 106 log.go:172] (0xc000bbf970) (0xc0003eaa00) Stream added, broadcasting: 5\nI0504 11:10:33.949801 106 log.go:172] 
(0xc000bbf970) Reply frame received for 5\nI0504 11:10:34.014194 106 log.go:172] (0xc000bbf970) Data frame received for 3\nI0504 11:10:34.014231 106 log.go:172] (0xc0005415e0) (3) Data frame handling\nI0504 11:10:34.014255 106 log.go:172] (0xc000bbf970) Data frame received for 5\nI0504 11:10:34.014270 106 log.go:172] (0xc0003eaa00) (5) Data frame handling\nI0504 11:10:34.014281 106 log.go:172] (0xc0003eaa00) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.18 31422\nConnection to 172.17.0.18 31422 port [tcp/31422] succeeded!\nI0504 11:10:34.015039 106 log.go:172] (0xc000bbf970) Data frame received for 5\nI0504 11:10:34.015052 106 log.go:172] (0xc0003eaa00) (5) Data frame handling\nI0504 11:10:34.016018 106 log.go:172] (0xc000bbf970) Data frame received for 1\nI0504 11:10:34.016082 106 log.go:172] (0xc000962be0) (1) Data frame handling\nI0504 11:10:34.016100 106 log.go:172] (0xc000962be0) (1) Data frame sent\nI0504 11:10:34.016114 106 log.go:172] (0xc000bbf970) (0xc000962be0) Stream removed, broadcasting: 1\nI0504 11:10:34.016127 106 log.go:172] (0xc000bbf970) Go away received\nI0504 11:10:34.016420 106 log.go:172] (0xc000bbf970) (0xc000962be0) Stream removed, broadcasting: 1\nI0504 11:10:34.016433 106 log.go:172] (0xc000bbf970) (0xc0005415e0) Stream removed, broadcasting: 3\nI0504 11:10:34.016439 106 log.go:172] (0xc000bbf970) (0xc0003eaa00) Stream removed, broadcasting: 5\n" May 4 11:10:34.020: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:10:34.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-733" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:14.909 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":4,"skipped":81,"failed":0} S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:10:34.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2193, will wait for the garbage collector to delete the pods May 4 11:10:40.234: INFO: Deleting Job.batch foo took: 41.468503ms May 4 11:10:40.334: INFO: Terminating Job.batch foo pods took: 100.232325ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:11:23.443: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "job-2193" for this suite. • [SLOW TEST:49.423 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":5,"skipped":82,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:11:23.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1148 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1148 I0504 11:11:23.670839 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1148, replica count: 2 I0504 11:11:26.721322 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 11:11:29.721569 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 11:11:29.721: INFO: Creating new exec pod May 4 11:11:34.737: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1148 execpodgw4ph -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 4 11:11:35.001: INFO: stderr: "I0504 11:11:34.879355 126 log.go:172] (0xc0008fc0b0) (0xc0003f4aa0) Create stream\nI0504 11:11:34.879437 126 log.go:172] (0xc0008fc0b0) (0xc0003f4aa0) Stream added, broadcasting: 1\nI0504 11:11:34.883441 126 log.go:172] (0xc0008fc0b0) Reply frame received for 1\nI0504 11:11:34.883506 126 log.go:172] (0xc0008fc0b0) (0xc000631220) Create stream\nI0504 11:11:34.883534 126 log.go:172] (0xc0008fc0b0) (0xc000631220) Stream added, broadcasting: 3\nI0504 11:11:34.884458 126 log.go:172] (0xc0008fc0b0) Reply frame received for 3\nI0504 11:11:34.884488 126 log.go:172] (0xc0008fc0b0) (0xc0008ec000) Create stream\nI0504 11:11:34.884495 126 log.go:172] (0xc0008fc0b0) (0xc0008ec000) Stream added, broadcasting: 5\nI0504 11:11:34.885654 126 log.go:172] (0xc0008fc0b0) Reply frame received for 5\nI0504 11:11:34.992043 126 log.go:172] (0xc0008fc0b0) Data frame received for 5\nI0504 11:11:34.992077 126 log.go:172] (0xc0008ec000) (5) Data frame handling\nI0504 11:11:34.992091 126 log.go:172] (0xc0008ec000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0504 11:11:34.994118 126 
log.go:172] (0xc0008fc0b0) Data frame received for 5\nI0504 11:11:34.994141 126 log.go:172] (0xc0008ec000) (5) Data frame handling\nI0504 11:11:34.994159 126 log.go:172] (0xc0008ec000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0504 11:11:34.994519 126 log.go:172] (0xc0008fc0b0) Data frame received for 3\nI0504 11:11:34.994543 126 log.go:172] (0xc000631220) (3) Data frame handling\nI0504 11:11:34.994998 126 log.go:172] (0xc0008fc0b0) Data frame received for 5\nI0504 11:11:34.995027 126 log.go:172] (0xc0008ec000) (5) Data frame handling\nI0504 11:11:34.996688 126 log.go:172] (0xc0008fc0b0) Data frame received for 1\nI0504 11:11:34.996703 126 log.go:172] (0xc0003f4aa0) (1) Data frame handling\nI0504 11:11:34.996716 126 log.go:172] (0xc0003f4aa0) (1) Data frame sent\nI0504 11:11:34.996728 126 log.go:172] (0xc0008fc0b0) (0xc0003f4aa0) Stream removed, broadcasting: 1\nI0504 11:11:34.996874 126 log.go:172] (0xc0008fc0b0) Go away received\nI0504 11:11:34.997108 126 log.go:172] (0xc0008fc0b0) (0xc0003f4aa0) Stream removed, broadcasting: 1\nI0504 11:11:34.997264 126 log.go:172] (0xc0008fc0b0) (0xc000631220) Stream removed, broadcasting: 3\nI0504 11:11:34.997275 126 log.go:172] (0xc0008fc0b0) (0xc0008ec000) Stream removed, broadcasting: 5\n" May 4 11:11:35.002: INFO: stdout: "" May 4 11:11:35.002: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1148 execpodgw4ph -- /bin/sh -x -c nc -zv -t -w 2 10.109.18.181 80' May 4 11:11:35.206: INFO: stderr: "I0504 11:11:35.123866 146 log.go:172] (0xc00003a420) (0xc000823180) Create stream\nI0504 11:11:35.123918 146 log.go:172] (0xc00003a420) (0xc000823180) Stream added, broadcasting: 1\nI0504 11:11:35.126215 146 log.go:172] (0xc00003a420) Reply frame received for 1\nI0504 11:11:35.126246 146 log.go:172] (0xc00003a420) (0xc00091e000) Create stream\nI0504 11:11:35.126253 146 log.go:172] (0xc00003a420) (0xc00091e000) Stream added, broadcasting: 3\nI0504 11:11:35.127144 146 log.go:172] (0xc00003a420) Reply frame received for 3\nI0504 11:11:35.127178 146 log.go:172] (0xc00003a420) (0xc00091e0a0) Create stream\nI0504 11:11:35.127190 146 log.go:172] (0xc00003a420) (0xc00091e0a0) Stream added, broadcasting: 5\nI0504 11:11:35.128070 146 log.go:172] (0xc00003a420) Reply frame received for 5\nI0504 11:11:35.199016 146 log.go:172] (0xc00003a420) Data frame received for 5\nI0504 11:11:35.199043 146 log.go:172] (0xc00091e0a0) (5) Data frame handling\nI0504 11:11:35.199050 146 log.go:172] (0xc00091e0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.109.18.181 80\nConnection to 10.109.18.181 80 port [tcp/http] succeeded!\nI0504 11:11:35.199063 146 log.go:172] (0xc00003a420) Data frame received for 3\nI0504 11:11:35.199067 146 log.go:172] (0xc00091e000) (3) Data frame handling\nI0504 11:11:35.199157 146 log.go:172] (0xc00003a420) Data frame received for 5\nI0504 11:11:35.199182 146 log.go:172] (0xc00091e0a0) (5) Data frame handling\nI0504 11:11:35.201029 146 log.go:172] (0xc00003a420) Data frame received for 1\nI0504 11:11:35.201067 146 log.go:172] (0xc000823180) (1) Data frame handling\nI0504 11:11:35.201088 146 log.go:172] (0xc000823180) (1) Data frame sent\nI0504 11:11:35.201294 146 log.go:172] (0xc00003a420) (0xc000823180) Stream removed, broadcasting: 1\nI0504 11:11:35.201389 146 log.go:172] (0xc00003a420) Go away received\nI0504 11:11:35.201804 146 log.go:172] (0xc00003a420) (0xc000823180) Stream removed, broadcasting: 1\nI0504 11:11:35.201841 146 
log.go:172] (0xc00003a420) (0xc00091e000) Stream removed, broadcasting: 3\nI0504 11:11:35.201861 146 log.go:172] (0xc00003a420) (0xc00091e0a0) Stream removed, broadcasting: 5\n" May 4 11:11:35.206: INFO: stdout: "" May 4 11:11:35.206: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:11:35.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1148" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:11.792 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":6,"skipped":85,"failed":0} SSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:11:35.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 4 11:11:35.380: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b90db8fb-cf60-4e29-9948-867fbb54ed7d" in namespace "projected-8543" to be "Succeeded or Failed" May 4 11:11:35.417: INFO: Pod "downwardapi-volume-b90db8fb-cf60-4e29-9948-867fbb54ed7d": Phase="Pending", Reason="", readiness=false. Elapsed: 36.766644ms May 4 11:11:37.421: INFO: Pod "downwardapi-volume-b90db8fb-cf60-4e29-9948-867fbb54ed7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040556245s May 4 11:11:39.425: INFO: Pod "downwardapi-volume-b90db8fb-cf60-4e29-9948-867fbb54ed7d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.044771201s STEP: Saw pod success May 4 11:11:39.425: INFO: Pod "downwardapi-volume-b90db8fb-cf60-4e29-9948-867fbb54ed7d" satisfied condition "Succeeded or Failed" May 4 11:11:39.428: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-b90db8fb-cf60-4e29-9948-867fbb54ed7d container client-container: STEP: delete the pod May 4 11:11:39.486: INFO: Waiting for pod downwardapi-volume-b90db8fb-cf60-4e29-9948-867fbb54ed7d to disappear May 4 11:11:39.522: INFO: Pod downwardapi-volume-b90db8fb-cf60-4e29-9948-867fbb54ed7d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:11:39.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8543" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":7,"skipped":89,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:11:39.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test hostPath mode May 4 11:11:39.661: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1838" to be "Succeeded or Failed" May 4 11:11:39.706: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 44.712928ms May 4 11:11:41.710: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048784188s May 4 11:11:43.714: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052541997s May 4 11:11:45.718: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.056510667s STEP: Saw pod success May 4 11:11:45.718: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed" May 4 11:11:45.723: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 4 11:11:45.799: INFO: Waiting for pod pod-host-path-test to disappear May 4 11:11:45.810: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:11:45.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-1838" for this suite. 
• [SLOW TEST:6.287 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":114,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:11:45.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1665.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1665.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 11:11:52.014: INFO: DNS probes using dns-1665/dns-test-00c6ee03-b4ed-48ff-83f6-73199e845c63 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:11:52.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1665" for this suite. 
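The wheezy and jessie probe pods above simply loop over dig lookups of the cluster DNS name over UDP and TCP. The same check can be run by hand from any pod that has dig available; the pod name dnsutils and namespace default below are assumptions for illustration, while the dig flags match the probe commands shown in the log:

# UDP lookup of the kubernetes service A record, matching the probe's +notcp query
kubectl exec -n default dnsutils -- dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A

# The same lookup over TCP, matching the probe's +tcp query
kubectl exec -n default dnsutils -- dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A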
• [SLOW TEST:6.300 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":9,"skipped":124,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:11:52.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 4 11:11:53.110: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 4 11:11:55.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187513, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187513, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187513, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187512, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 11:11:58.197: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:11:58.201: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:11:59.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-webhook-9184" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.585 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":10,"skipped":135,"failed":0} SSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:11:59.706: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:12:03.819: INFO: Waiting up to 5m0s for pod "client-envvars-dc5b59aa-e89e-4a08-89ac-e1a4508adee5" in namespace "pods-3013" to be "Succeeded or Failed" May 4 11:12:03.840: INFO: Pod "client-envvars-dc5b59aa-e89e-4a08-89ac-e1a4508adee5": Phase="Pending", Reason="", readiness=false. Elapsed: 20.550611ms May 4 11:12:05.844: INFO: Pod "client-envvars-dc5b59aa-e89e-4a08-89ac-e1a4508adee5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025292477s May 4 11:12:07.848: INFO: Pod "client-envvars-dc5b59aa-e89e-4a08-89ac-e1a4508adee5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028721521s STEP: Saw pod success May 4 11:12:07.848: INFO: Pod "client-envvars-dc5b59aa-e89e-4a08-89ac-e1a4508adee5" satisfied condition "Succeeded or Failed" May 4 11:12:07.850: INFO: Trying to get logs from node kali-worker2 pod client-envvars-dc5b59aa-e89e-4a08-89ac-e1a4508adee5 container env3cont: STEP: delete the pod May 4 11:12:07.901: INFO: Waiting for pod client-envvars-dc5b59aa-e89e-4a08-89ac-e1a4508adee5 to disappear May 4 11:12:07.909: INFO: Pod client-envvars-dc5b59aa-e89e-4a08-89ac-e1a4508adee5 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:12:07.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3013" for this suite. 
• [SLOW TEST:8.210 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":139,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:12:07.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:12:08.000: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 4 11:12:13.008: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 4 11:12:13.008: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 4 11:12:15.013: INFO: Creating deployment "test-rollover-deployment" May 4 11:12:15.026: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 4 11:12:17.031: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 4 11:12:17.038: INFO: Ensure that both replica sets have 1 created replica May 4 11:12:17.043: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 4 11:12:17.048: INFO: Updating deployment test-rollover-deployment May 4 11:12:17.048: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 4 11:12:19.118: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 4 11:12:19.124: INFO: Make sure deployment "test-rollover-deployment" is complete May 4 11:12:19.129: INFO: all replica sets need to contain the pod-template-hash label May 4 11:12:19.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187537, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 
11:12:21.136: INFO: all replica sets need to contain the pod-template-hash label May 4 11:12:21.136: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187540, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:12:23.143: INFO: all replica sets need to contain the pod-template-hash label May 4 11:12:23.143: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187540, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:12:25.138: INFO: all replica sets need to contain the pod-template-hash label May 4 11:12:25.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187540, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:12:27.138: INFO: all replica sets need to contain the pod-template-hash label May 4 11:12:27.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, 
loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187540, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:12:29.138: INFO: all replica sets need to contain the pod-template-hash label May 4 11:12:29.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187540, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187535, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:12:31.137: INFO: May 4 11:12:31.137: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 4 11:12:31.144: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6167 /apis/apps/v1/namespaces/deployment-6167/deployments/test-rollover-deployment 460e1f3a-a64d-4fc8-98fe-029413fc832b 1417028 2 2020-05-04 11:12:15 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-04 11:12:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 
46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-04 11:12:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0025ec918 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-04 11:12:15 +0000 UTC,LastTransitionTime:2020-05-04 11:12:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-05-04 11:12:30 +0000 UTC,LastTransitionTime:2020-05-04 11:12:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 4 11:12:31.148: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b deployment-6167 /apis/apps/v1/namespaces/deployment-6167/replicasets/test-rollover-deployment-84f7f6f64b 188d1841-6728-4683-a6b9-a6f97c697cf6 1417017 2 2020-05-04 11:12:17 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 460e1f3a-a64d-4fc8-98fe-029413fc832b 0xc00251e867 0xc00251e868}] [] [{kube-controller-manager Update apps/v1 2020-05-04 11:12:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 54 48 101 49 102 51 97 45 97 54 52 100 45 52 102 99 56 45 57 56 102 101 45 48 50 57 52 49 51 102 99 56 51 50 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 
99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00251e998 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 4 11:12:31.149: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 4 11:12:31.150: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6167 /apis/apps/v1/namespaces/deployment-6167/replicasets/test-rollover-controller 09ab08ff-d47c-4cac-affb-1785088165d6 1417027 2 2020-05-04 11:12:07 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 460e1f3a-a64d-4fc8-98fe-029413fc832b 0xc00251e407 0xc00251e408}] [] [{e2e.test Update apps/v1 2020-05-04 11:12:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-04 11:12:30 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 
115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 54 48 101 49 102 51 97 45 97 54 52 100 45 52 102 99 56 45 57 56 102 101 45 48 50 57 52 49 51 102 99 56 51 50 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00251e5a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 11:12:31.150: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-6167 /apis/apps/v1/namespaces/deployment-6167/replicasets/test-rollover-deployment-5686c4cfd5 635fadec-8ac1-4392-a657-3a39ef72abb7 1416968 2 2020-05-04 11:12:15 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 460e1f3a-a64d-4fc8-98fe-029413fc832b 0xc00251e617 0xc00251e618}] [] [{kube-controller-manager Update apps/v1 2020-05-04 11:12:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 54 48 101 49 102 51 97 45 97 54 52 100 45 52 102 99 56 45 57 56 102 101 45 48 50 57 52 49 51 102 
99 56 51 50 98 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00251e758 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil 
default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 11:12:31.153: INFO: Pod "test-rollover-deployment-84f7f6f64b-kwmgx" is available: &Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-kwmgx test-rollover-deployment-84f7f6f64b- deployment-6167 /api/v1/namespaces/deployment-6167/pods/test-rollover-deployment-84f7f6f64b-kwmgx 76e69ea7-7cb0-4018-ad84-b376f02e456f 1416982 0 2020-05-04 11:12:17 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 188d1841-6728-4683-a6b9-a6f97c697cf6 0xc0024b6187 0xc0024b6188}] [] [{kube-controller-manager Update v1 2020-05-04 11:12:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 56 56 100 49 56 52 49 45 54 55 50 56 45 52 54 56 51 45 97 54 98 57 45 97 54 102 57 55 99 54 57 55 99 102 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 11:12:20 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 
102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 56 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rqsxj,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rqsxj,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rqsxj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Aff
inity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:12:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:12:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:12:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:12:17 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.86,StartTime:2020-05-04 11:12:17 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 11:12:19 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://71e9be13f4d45b6f23184be4010a5d4e1e4c64ad1afd83d2e7354bb6bb40e379,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.86,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:12:31.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6167" for this suite. 
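The rollover exercised above hinges on the RollingUpdate parameters visible in the Deployment dump (maxSurge=1, maxUnavailable=0, minReadySeconds=10): the controller may create one surge pod but never drops below the desired count, and a new pod only counts as available after ten ready seconds. A minimal client-go sketch of an equivalent Deployment follows; the kubeconfig path, namespace and client wiring are assumptions for illustration, not the e2e framework's own helpers.

```go
// Sketch only: builds a Deployment roughly matching the "test-rollover-deployment"
// spec dumped above (RollingUpdate, maxSurge=1, maxUnavailable=0, minReadySeconds=10).
// Kubeconfig path and namespace are taken from the log; everything else is illustrative.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	replicas := int32(1)
	maxSurge := intstr.FromInt(1)
	maxUnavailable := intstr.FromInt(0)
	labels := map[string]string{"name": "rollover-pod"}

	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rollover-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:        &replicas,
			MinReadySeconds: 10, // a new pod must stay ready this long before it counts as available
			Selector:        &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxSurge:       &maxSurge,       // at most one extra pod during the rollover
					MaxUnavailable: &maxUnavailable, // never drop below the desired replica count
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "agnhost",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
					}},
				},
			},
		},
	}

	if _, err := client.AppsV1().Deployments("deployment-6167").Create(context.TODO(), d, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```

With maxUnavailable=0, the old replica set keeps serving until the new pod has been ready for the full minReadySeconds window, which is why the status dumps above show both replica sets briefly coexisting before the old one is scaled to zero.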
• [SLOW TEST:23.245 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":12,"skipped":157,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:12:31.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:12:35.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6726" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":220,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:12:35.532: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 4 11:12:39.893: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:12:40.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2643" for this suite. 
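The terminated-container test just above relies on TerminationMessagePolicy FallbackToLogsOnError: when a failing container has written nothing to its termination-message path, the kubelet takes the message from the tail of the container log, which is why the expected message equals the logged DONE. A hedged sketch of such a pod spec follows; the pod name, image and command are illustrative assumptions, since the log does not print the test's manifest.

```go
// Illustrative only: a pod whose termination message falls back to the last
// chunk of container log output. Names, image and command are assumptions.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"/bin/sh", "-c", "echo DONE; exit 1"},
				// With FallbackToLogsOnError, a failing container that writes nothing to
				// /dev/termination-log gets its message from the tail of its log output,
				// so "DONE" becomes the termination message checked by the test above.
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```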
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":284,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:12:40.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-6a07bd98-aa56-4cbe-8794-2b1f5e16daf1 STEP: Creating configMap with name cm-test-opt-upd-6b6f699d-ffe2-4c70-b366-7c8bf3843697 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6a07bd98-aa56-4cbe-8794-2b1f5e16daf1 STEP: Updating configmap cm-test-opt-upd-6b6f699d-ffe2-4c70-b366-7c8bf3843697 STEP: Creating configMap with name cm-test-opt-create-b9be827f-9171-4749-bca5-58c3d30cab1c STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:12:48.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4190" for this suite. • [SLOW TEST:8.264 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":287,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:12:48.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2134.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2134.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2134.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2134.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2134.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2134.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 4 11:12:54.657: INFO: DNS probes using dns-2134/dns-test-208b22bd-3d7a-4eb8-a951-092bd14c5fc6 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:12:54.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2134" for this suite. • [SLOW TEST:6.772 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":16,"skipped":312,"failed":0} SSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:12:55.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
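A sketch of the kind of pod that step creates is shown next, matching the dnsConfig visible in the dump that follows (nameserver 1.1.1.1, search domain resolv.conf.local, an agnhost pause container); the client setup is an assumption for illustration, not the e2e framework's own pod helper.

```go
// Sketch of a pod with dnsPolicy=None and a custom dnsConfig, mirroring the
// dns-8615 pod dumped below. Kubeconfig path, pod name and namespace are taken
// from the log; the client wiring is assumed.
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-8615"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
				Args:  []string{"pause"},
			}},
			// DNSNone makes the kubelet ignore cluster DNS entirely; the pod's
			// resolv.conf is built solely from the dnsConfig below.
			DNSPolicy: corev1.DNSNone,
			DNSConfig: &corev1.PodDNSConfig{
				Nameservers: []string{"1.1.1.1"},
				Searches:    []string{"resolv.conf.local"},
			},
		},
	}

	if _, err := client.CoreV1().Pods("dns-8615").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```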
May 4 11:12:55.592: INFO: Created pod &Pod{ObjectMeta:{dns-8615 dns-8615 /api/v1/namespaces/dns-8615/pods/dns-8615 8b0e0c9d-7078-4b26-a401-db6efeef9142 1417256 0 2020-05-04 11:12:55 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-05-04 11:12:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hghnc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hghnc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hghnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]Local
ObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 11:12:55.960: INFO: The status of Pod dns-8615 is Pending, waiting for it to be Running (with Ready = true) May 4 11:12:57.964: INFO: The status of Pod dns-8615 is Pending, waiting for it to be Running (with Ready = true) May 4 11:12:59.981: INFO: The status of Pod dns-8615 is Pending, waiting for it to be Running (with Ready = true) May 4 11:13:02.000: INFO: The status of Pod dns-8615 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... May 4 11:13:02.000: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-8615 PodName:dns-8615 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 11:13:02.000: INFO: >>> kubeConfig: /root/.kube/config I0504 11:13:02.065540 7 log.go:172] (0xc001e609a0) (0xc001adb9a0) Create stream I0504 11:13:02.065584 7 log.go:172] (0xc001e609a0) (0xc001adb9a0) Stream added, broadcasting: 1 I0504 11:13:02.068403 7 log.go:172] (0xc001e609a0) Reply frame received for 1 I0504 11:13:02.068457 7 log.go:172] (0xc001e609a0) (0xc001fc7040) Create stream I0504 11:13:02.068474 7 log.go:172] (0xc001e609a0) (0xc001fc7040) Stream added, broadcasting: 3 I0504 11:13:02.069540 7 log.go:172] (0xc001e609a0) Reply frame received for 3 I0504 11:13:02.069594 7 log.go:172] (0xc001e609a0) (0xc001b4e000) Create stream I0504 11:13:02.069612 7 log.go:172] (0xc001e609a0) (0xc001b4e000) Stream added, broadcasting: 5 I0504 11:13:02.070358 7 log.go:172] (0xc001e609a0) Reply frame received for 5 I0504 11:13:02.164670 7 log.go:172] (0xc001e609a0) Data frame received for 3 I0504 11:13:02.164689 7 log.go:172] (0xc001fc7040) (3) Data frame handling I0504 11:13:02.164698 7 log.go:172] (0xc001fc7040) (3) Data frame sent I0504 11:13:02.165821 7 log.go:172] (0xc001e609a0) Data frame received for 3 I0504 11:13:02.165839 7 log.go:172] (0xc001fc7040) (3) Data frame handling I0504 11:13:02.165947 7 log.go:172] (0xc001e609a0) Data frame received for 5 I0504 11:13:02.165966 7 log.go:172] (0xc001b4e000) (5) Data frame handling I0504 11:13:02.167688 7 log.go:172] (0xc001e609a0) Data frame received for 1 I0504 11:13:02.167712 7 log.go:172] (0xc001adb9a0) (1) Data frame handling I0504 11:13:02.167725 7 log.go:172] (0xc001adb9a0) (1) Data frame sent I0504 11:13:02.167740 7 log.go:172] (0xc001e609a0) (0xc001adb9a0) Stream removed, broadcasting: 1 I0504 11:13:02.167839 7 log.go:172] (0xc001e609a0) Go away received I0504 11:13:02.168055 7 
log.go:172] (0xc001e609a0) (0xc001adb9a0) Stream removed, broadcasting: 1 I0504 11:13:02.168086 7 log.go:172] (0xc001e609a0) (0xc001fc7040) Stream removed, broadcasting: 3 I0504 11:13:02.168095 7 log.go:172] (0xc001e609a0) (0xc001b4e000) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... May 4 11:13:02.168: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-8615 PodName:dns-8615 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 11:13:02.168: INFO: >>> kubeConfig: /root/.kube/config I0504 11:13:02.194770 7 log.go:172] (0xc001e60fd0) (0xc001adbc20) Create stream I0504 11:13:02.194807 7 log.go:172] (0xc001e60fd0) (0xc001adbc20) Stream added, broadcasting: 1 I0504 11:13:02.197697 7 log.go:172] (0xc001e60fd0) Reply frame received for 1 I0504 11:13:02.197734 7 log.go:172] (0xc001e60fd0) (0xc001fc70e0) Create stream I0504 11:13:02.197749 7 log.go:172] (0xc001e60fd0) (0xc001fc70e0) Stream added, broadcasting: 3 I0504 11:13:02.198659 7 log.go:172] (0xc001e60fd0) Reply frame received for 3 I0504 11:13:02.198729 7 log.go:172] (0xc001e60fd0) (0xc001b04000) Create stream I0504 11:13:02.198751 7 log.go:172] (0xc001e60fd0) (0xc001b04000) Stream added, broadcasting: 5 I0504 11:13:02.199639 7 log.go:172] (0xc001e60fd0) Reply frame received for 5 I0504 11:13:02.274108 7 log.go:172] (0xc001e60fd0) Data frame received for 3 I0504 11:13:02.274147 7 log.go:172] (0xc001fc70e0) (3) Data frame handling I0504 11:13:02.274174 7 log.go:172] (0xc001fc70e0) (3) Data frame sent I0504 11:13:02.275279 7 log.go:172] (0xc001e60fd0) Data frame received for 3 I0504 11:13:02.275372 7 log.go:172] (0xc001fc70e0) (3) Data frame handling I0504 11:13:02.275418 7 log.go:172] (0xc001e60fd0) Data frame received for 5 I0504 11:13:02.275442 7 log.go:172] (0xc001b04000) (5) Data frame handling I0504 11:13:02.277026 7 log.go:172] (0xc001e60fd0) Data frame received for 1 I0504 11:13:02.277043 7 log.go:172] (0xc001adbc20) (1) Data frame handling I0504 11:13:02.277062 7 log.go:172] (0xc001adbc20) (1) Data frame sent I0504 11:13:02.277084 7 log.go:172] (0xc001e60fd0) (0xc001adbc20) Stream removed, broadcasting: 1 I0504 11:13:02.277240 7 log.go:172] (0xc001e60fd0) Go away received I0504 11:13:02.277325 7 log.go:172] (0xc001e60fd0) (0xc001adbc20) Stream removed, broadcasting: 1 I0504 11:13:02.277344 7 log.go:172] (0xc001e60fd0) (0xc001fc70e0) Stream removed, broadcasting: 3 I0504 11:13:02.277353 7 log.go:172] (0xc001e60fd0) (0xc001b04000) Stream removed, broadcasting: 5 May 4 11:13:02.277: INFO: Deleting pod dns-8615... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:13:02.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8615" for this suite. 
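The ExecWithOptions and stream lines above correspond to an exec request multiplexed over SPDY (the "broadcasting: 1/3/5" streams are the control, stdout and stderr channels). A rough client-go equivalent is sketched below, reusing the pod, container and command names from the log; the executor wiring is an assumption, not the framework's ExecWithOptions helper.

```go
// Rough client-go equivalent of the logged ExecWithOptions call: run
// `/agnhost dns-server-list` in the dns-8615 pod and capture its output.
package main

import (
	"bytes"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/remotecommand"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Build the pods/exec subresource request, equivalent to what the e2e
	// framework logs as ExecWithOptions {Command:[/agnhost dns-server-list] ...}.
	req := client.CoreV1().RESTClient().Post().
		Resource("pods").
		Namespace("dns-8615").
		Name("dns-8615").
		SubResource("exec").
		VersionedParams(&corev1.PodExecOptions{
			Container: "agnhost",
			Command:   []string{"/agnhost", "dns-server-list"},
			Stdout:    true,
			Stderr:    true,
		}, scheme.ParameterCodec)

	exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
	if err != nil {
		panic(err)
	}

	var stdout, stderr bytes.Buffer
	// Stream blocks until the command exits, shuttling stdout/stderr over the
	// SPDY connection that the log lines above trace frame by frame.
	if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
		panic(err)
	}
	fmt.Println(stdout.String())
}
```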
• [SLOW TEST:7.265 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":17,"skipped":317,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:13:02.336: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:13:06.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7694" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":334,"failed":0} ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:13:06.738: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 4 11:13:06.792: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
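Registering a sample API server means creating an APIService object that tells the aggregation layer which group/version to proxy to which in-cluster Service. The sketch below shows the shape of such an object; the group v1alpha1.wardle.example.com, the service name and the CA bundle placeholder are assumptions, since the log does not print the registration payload.

```go
// Sketch of an APIService registration for an aggregated API server.
// Group/version, service name and CA bundle are assumptions; only the
// namespace (aggregator-6440) comes from the log.
package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	port := int32(443)
	svc := apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			// The aggregator proxies requests for this group/version to the Service below.
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-6440",
				Name:      "sample-api", // hypothetical service name
				Port:      &port,
			},
			CABundle:             []byte("<PEM CA bundle for the sample apiserver>"), // placeholder
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	out, _ := json.MarshalIndent(svc, "", "  ")
	fmt.Println(string(out))
}
```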
May 4 11:13:07.218: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 4 11:13:09.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187587, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187587, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187587, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187587, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:13:11.564: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187587, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187587, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187587, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187587, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:13:14.198: INFO: Waited 627.830423ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:13:14.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-6440" for this suite. • [SLOW TEST:8.222 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":19,"skipped":334,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:13:14.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:13:31.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6455" for this suite. • [SLOW TEST:16.354 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":20,"skipped":341,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:13:31.314: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:13:42.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2037" for this suite. • [SLOW TEST:11.216 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":21,"skipped":344,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:13:42.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 4 11:13:43.190: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 4 11:13:45.282: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187623, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187623, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187623, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724187623, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} 
STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 11:13:48.368: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:13:48.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:13:49.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4245" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.071 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":22,"skipped":363,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:13:49.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 4 11:13:54.759: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:13:54.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1589" for this suite. 
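Adoption and release in the ReplicaSet test above hinge entirely on label selection: the ReplicaSet's selector matches the pre-created pod's 'name' label, so the controller adopts the orphan; changing that label afterwards makes the controller release the pod and create a replacement. A minimal sketch of that relationship follows, with hypothetical object names (only the 'name=pod-adoption-release' label is taken from this run) and the pause image the events above show is already present on these nodes:

apiVersion: v1
kind: Pod
metadata:
  name: orphan-pod                     # hypothetical; created before the ReplicaSet exists
  labels:
    name: pod-adoption-release
spec:
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release           # hypothetical reconstruction
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release       # matches the orphan pod, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2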
• [SLOW TEST:5.346 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":23,"skipped":374,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:13:54.950: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 4 11:13:55.051: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 4 11:13:55.083: INFO: Waiting for terminating namespaces to be deleted... May 4 11:13:55.110: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 4 11:13:55.117: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:13:55.117: INFO: Container kindnet-cni ready: true, restart count 1 May 4 11:13:55.117: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:13:55.117: INFO: Container kube-proxy ready: true, restart count 0 May 4 11:13:55.117: INFO: pod-adoption-release from replicaset-1589 started at 2020-05-04 11:13:49 +0000 UTC (1 container statuses recorded) May 4 11:13:55.117: INFO: Container pod-adoption-release ready: true, restart count 0 May 4 11:13:55.117: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 4 11:13:55.148: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:13:55.149: INFO: Container kindnet-cni ready: true, restart count 0 May 4 11:13:55.149: INFO: pod-adoption-release-mlh2x from replicaset-1589 started at 2020-05-04 11:13:54 +0000 UTC (1 container statuses recorded) May 4 11:13:55.149: INFO: Container pod-adoption-release ready: false, restart count 0 May 4 11:13:55.149: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:13:55.149: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node kali-worker STEP: verifying the node has the label node kali-worker2 May 4 11:13:55.287: INFO: Pod kindnet-f8plf requesting resource cpu=100m on Node kali-worker May 4 11:13:55.287: INFO: Pod kindnet-mcdh2 requesting resource cpu=100m on Node kali-worker2 May 4 11:13:55.287: INFO: Pod kube-proxy-mmnb6 requesting resource cpu=0m on Node 
kali-worker2 May 4 11:13:55.287: INFO: Pod kube-proxy-vrswj requesting resource cpu=0m on Node kali-worker May 4 11:13:55.288: INFO: Pod pod-adoption-release requesting resource cpu=0m on Node kali-worker May 4 11:13:55.288: INFO: Pod pod-adoption-release-mlh2x requesting resource cpu=0m on Node kali-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 4 11:13:55.288: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 May 4 11:13:55.292: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-3e5bcf0e-e359-49d2-8dea-5585c95e1b67.160bcef2afac48f5], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4929/filler-pod-3e5bcf0e-e359-49d2-8dea-5585c95e1b67 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3e5bcf0e-e359-49d2-8dea-5585c95e1b67.160bcef300762720], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3e5bcf0e-e359-49d2-8dea-5585c95e1b67.160bcef3740377c3], Reason = [Created], Message = [Created container filler-pod-3e5bcf0e-e359-49d2-8dea-5585c95e1b67] STEP: Considering event: Type = [Normal], Name = [filler-pod-3e5bcf0e-e359-49d2-8dea-5585c95e1b67.160bcef389c4872b], Reason = [Started], Message = [Started container filler-pod-3e5bcf0e-e359-49d2-8dea-5585c95e1b67] STEP: Considering event: Type = [Normal], Name = [filler-pod-744ccc11-9eed-4d1c-a46d-a5d6121c4768.160bcef2b1102192], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4929/filler-pod-744ccc11-9eed-4d1c-a46d-a5d6121c4768 to kali-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-744ccc11-9eed-4d1c-a46d-a5d6121c4768.160bcef34002ef32], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-744ccc11-9eed-4d1c-a46d-a5d6121c4768.160bcef3875cdd51], Reason = [Created], Message = [Created container filler-pod-744ccc11-9eed-4d1c-a46d-a5d6121c4768] STEP: Considering event: Type = [Normal], Name = [filler-pod-744ccc11-9eed-4d1c-a46d-a5d6121c4768.160bcef396fe54ac], Reason = [Started], Message = [Started container filler-pod-744ccc11-9eed-4d1c-a46d-a5d6121c4768] STEP: Considering event: Type = [Warning], Name = [additional-pod.160bcef418682fdf], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node kali-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kali-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:14:02.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4929" for this suite. 
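The filler pods above work by requesting nearly all of each node's remaining allocatable CPU, so the final pod's request cannot be satisfied anywhere and scheduling fails with "Insufficient cpu". A minimal sketch of such a filler pod, assuming the figure logged in this run (11130m) and the temporary node label the test applies; the pod name is hypothetical:

apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-example             # hypothetical name
spec:
  nodeSelector:
    node: kali-worker2                 # temporary "node" label the test adds to pin the pod
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.2        # already present on the nodes, per the events above
    resources:
      requests:
        cpu: "11130m"                  # sized to consume the node's remaining allocatable CPU
      limits:
        cpu: "11130m"

Any additional pod that requests CPU then has no schedulable node, which is exactly the FailedScheduling event recorded above.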
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:7.470 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":24,"skipped":390,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:14:02.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:14:02.528: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:14:08.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5147" for this suite. 
• [SLOW TEST:6.229 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":413,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:14:08.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating all guestbook components May 4 11:14:08.815: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-slave labels: app: agnhost role: slave tier: backend spec: ports: - port: 6379 selector: app: agnhost role: slave tier: backend May 4 11:14:08.815: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2708' May 4 11:14:09.185: INFO: stderr: "" May 4 11:14:09.185: INFO: stdout: "service/agnhost-slave created\n" May 4 11:14:09.185: INFO: apiVersion: v1 kind: Service metadata: name: agnhost-master labels: app: agnhost role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: agnhost role: master tier: backend May 4 11:14:09.185: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2708' May 4 11:14:09.477: INFO: stderr: "" May 4 11:14:09.477: INFO: stdout: "service/agnhost-master created\n" May 4 11:14:09.477: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend May 4 11:14:09.477: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2708' May 4 11:14:09.865: INFO: stderr: "" May 4 11:14:09.865: INFO: stdout: "service/frontend created\n" May 4 11:14:09.865: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: guestbook-frontend image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--backend-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 80 May 4 11:14:09.865: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2708' May 4 11:14:10.118: INFO: stderr: "" May 4 11:14:10.118: INFO: stdout: "deployment.apps/frontend created\n" May 4 11:14:10.119: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-master spec: replicas: 1 selector: matchLabels: app: agnhost role: master tier: backend template: metadata: labels: app: agnhost role: master tier: backend spec: containers: - name: master image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 4 11:14:10.119: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2708' May 4 11:14:10.422: INFO: stderr: "" May 4 11:14:10.422: INFO: stdout: "deployment.apps/agnhost-master created\n" May 4 11:14:10.423: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: agnhost-slave spec: replicas: 2 selector: matchLabels: app: agnhost role: slave tier: backend template: metadata: labels: app: agnhost role: slave tier: backend spec: containers: - name: slave image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ] resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 May 4 11:14:10.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2708' May 4 11:14:10.750: INFO: stderr: "" May 4 11:14:10.750: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 4 11:14:10.750: INFO: Waiting for all frontend pods to be Running. May 4 11:14:20.801: INFO: Waiting for frontend to serve content. May 4 11:14:20.810: INFO: Trying to add a new entry to the guestbook. May 4 11:14:20.820: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 4 11:14:20.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2708' May 4 11:14:21.000: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 4 11:14:21.000: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 4 11:14:21.001: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2708' May 4 11:14:21.167: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 11:14:21.167: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 4 11:14:21.167: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2708' May 4 11:14:21.371: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 11:14:21.371: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 4 11:14:21.371: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2708' May 4 11:14:21.512: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 11:14:21.512: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 4 11:14:21.512: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2708' May 4 11:14:21.680: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 11:14:21.680: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 4 11:14:21.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2708' May 4 11:14:22.155: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 4 11:14:22.155: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:14:22.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2708" for this suite. 
• [SLOW TEST:13.843 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":275,"completed":26,"skipped":415,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:14:22.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-b9295444-f9dd-41c2-9151-f2776a95d56c STEP: Creating a pod to test consume configMaps May 4 11:14:23.165: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-573d32c7-c876-4aa9-9926-f6c32b0570b9" in namespace "projected-3589" to be "Succeeded or Failed" May 4 11:14:23.239: INFO: Pod "pod-projected-configmaps-573d32c7-c876-4aa9-9926-f6c32b0570b9": Phase="Pending", Reason="", readiness=false. Elapsed: 73.840927ms May 4 11:14:25.242: INFO: Pod "pod-projected-configmaps-573d32c7-c876-4aa9-9926-f6c32b0570b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077325901s May 4 11:14:27.247: INFO: Pod "pod-projected-configmaps-573d32c7-c876-4aa9-9926-f6c32b0570b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081902983s May 4 11:14:29.258: INFO: Pod "pod-projected-configmaps-573d32c7-c876-4aa9-9926-f6c32b0570b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093185798s STEP: Saw pod success May 4 11:14:29.258: INFO: Pod "pod-projected-configmaps-573d32c7-c876-4aa9-9926-f6c32b0570b9" satisfied condition "Succeeded or Failed" May 4 11:14:29.260: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-573d32c7-c876-4aa9-9926-f6c32b0570b9 container projected-configmap-volume-test: STEP: delete the pod May 4 11:14:29.307: INFO: Waiting for pod pod-projected-configmaps-573d32c7-c876-4aa9-9926-f6c32b0570b9 to disappear May 4 11:14:29.313: INFO: Pod pod-projected-configmaps-573d32c7-c876-4aa9-9926-f6c32b0570b9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:14:29.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3589" for this suite. 
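The volume under test above is a projected volume sourcing a ConfigMap, with defaultMode applied to the projected files; the test pod reads the file back to confirm the mode. A minimal sketch of that volume shape, with hypothetical names (the real run uses a generated ConfigMap name and a test image that inspects the mounted file):

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-example    # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: k8s.gcr.io/pause:3.2        # placeholder; the real test image prints the file and its mode
    volumeMounts:
    - name: config
      mountPath: /etc/projected
  volumes:
  - name: config
    projected:
      defaultMode: 0400                # octal 0400 (decimal 256), applied to the projected files
      sources:
      - configMap:
          name: example-config         # hypothetical ConfigMap name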
• [SLOW TEST:6.829 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":456,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:14:29.322: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod May 4 11:14:29.411: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:14:36.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6686" for this suite. 
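Init containers on a RestartAlways pod run one at a time, each to completion, before any regular container starts, which is the ordering the test above observes through pod status. A minimal sketch of the pattern, with hypothetical names and images not taken from this run:

apiVersion: v1
kind: Pod
metadata:
  name: init-demo                      # hypothetical
spec:
  restartPolicy: Always
  initContainers:
  - name: init-1
    image: busybox                     # hypothetical image choice
    command: ["sh", "-c", "true"]      # must exit 0 before init-2 starts
  - name: init-2
    image: busybox
    command: ["sh", "-c", "true"]
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.2        # starts only after both init containers succeed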
• [SLOW TEST:7.745 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":28,"skipped":457,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:14:37.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-b2440532-b135-45d6-97a0-14c857140c53 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:14:43.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5862" for this suite. 
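A ConfigMap can carry both UTF-8 data keys and base64-encoded binaryData keys; mounted as a volume, each key becomes a file and the binary keys are written out as raw bytes, which is what the "pod with binary data" wait above checks. A minimal sketch with hypothetical names and contents:

apiVersion: v1
kind: ConfigMap
metadata:
  name: binary-config-example          # hypothetical
data:
  text-key: "hello"                    # plain UTF-8 value
binaryData:
  binary-key: aGVsbG8gd29ybGQ=         # base64-encoded bytes ("hello world" here)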
• [SLOW TEST:6.227 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":533,"failed":0} SSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:14:43.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:14:43.412: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-554f6402-aa5d-4b10-83af-feed55aa270d" in namespace "security-context-test-7043" to be "Succeeded or Failed" May 4 11:14:43.422: INFO: Pod "alpine-nnp-false-554f6402-aa5d-4b10-83af-feed55aa270d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.788561ms May 4 11:14:45.535: INFO: Pod "alpine-nnp-false-554f6402-aa5d-4b10-83af-feed55aa270d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122749744s May 4 11:14:47.538: INFO: Pod "alpine-nnp-false-554f6402-aa5d-4b10-83af-feed55aa270d": Phase="Running", Reason="", readiness=true. Elapsed: 4.125988597s May 4 11:14:49.542: INFO: Pod "alpine-nnp-false-554f6402-aa5d-4b10-83af-feed55aa270d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.130073979s May 4 11:14:49.542: INFO: Pod "alpine-nnp-false-554f6402-aa5d-4b10-83af-feed55aa270d" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:14:49.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-7043" for this suite. 
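The knob exercised above is the container-level securityContext field allowPrivilegeEscalation; set to false, a non-root process cannot gain additional privileges (for example via setuid binaries), which the alpine-nnp-false pod probes. A minimal sketch, with a hypothetical pod name and image choice:

apiVersion: v1
kind: Pod
metadata:
  name: no-privilege-escalation        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: alpine
    image: alpine:3.11                 # hypothetical image choice
    command: ["sh", "-c", "id -u"]     # should report the non-root uid
    securityContext:
      runAsUser: 1000
      allowPrivilegeEscalation: false  # sets no_new_privs; setuid cannot raise privileges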
• [SLOW TEST:6.262 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":539,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:14:49.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-9b386338-f241-4ced-bfe4-4d259a604bba STEP: Creating a pod to test consume secrets May 4 11:14:49.829: INFO: Waiting up to 5m0s for pod "pod-secrets-fbad85a7-a162-42e9-b1c4-3f4fb31c873f" in namespace "secrets-9529" to be "Succeeded or Failed" May 4 11:14:49.859: INFO: Pod "pod-secrets-fbad85a7-a162-42e9-b1c4-3f4fb31c873f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.04248ms May 4 11:14:52.489: INFO: Pod "pod-secrets-fbad85a7-a162-42e9-b1c4-3f4fb31c873f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.660107747s May 4 11:14:54.494: INFO: Pod "pod-secrets-fbad85a7-a162-42e9-b1c4-3f4fb31c873f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.664461602s May 4 11:14:56.498: INFO: Pod "pod-secrets-fbad85a7-a162-42e9-b1c4-3f4fb31c873f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.669020481s STEP: Saw pod success May 4 11:14:56.498: INFO: Pod "pod-secrets-fbad85a7-a162-42e9-b1c4-3f4fb31c873f" satisfied condition "Succeeded or Failed" May 4 11:14:56.500: INFO: Trying to get logs from node kali-worker pod pod-secrets-fbad85a7-a162-42e9-b1c4-3f4fb31c873f container secret-volume-test: STEP: delete the pod May 4 11:14:56.532: INFO: Waiting for pod pod-secrets-fbad85a7-a162-42e9-b1c4-3f4fb31c873f to disappear May 4 11:14:56.548: INFO: Pod pod-secrets-fbad85a7-a162-42e9-b1c4-3f4fb31c873f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:14:56.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9529" for this suite. STEP: Destroying namespace "secret-namespace-2664" for this suite. 
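Secret volume sources are resolved strictly within the pod's own namespace, so the identically named secret created in secret-namespace-2664 never interferes with the mount performed in secrets-9529. A minimal sketch of that mount, with hypothetical pod and secret names:

apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-example          # hypothetical
  namespace: secrets-9529              # only this namespace is consulted for the secret
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: k8s.gcr.io/pause:3.2        # placeholder; the real test image reads the mounted file
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-vol
    secret:
      secretName: secret-test          # hypothetical; a same-named secret elsewhere is ignored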
• [SLOW TEST:7.004 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":553,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:14:56.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5973 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating stateful set ss in namespace statefulset-5973 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5973 May 4 11:14:56.670: INFO: Found 0 stateful pods, waiting for 1 May 4 11:15:06.676: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 4 11:15:06.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 11:15:06.937: INFO: stderr: "I0504 11:15:06.809949 411 log.go:172] (0xc00096a160) (0xc000665360) Create stream\nI0504 11:15:06.810005 411 log.go:172] (0xc00096a160) (0xc000665360) Stream added, broadcasting: 1\nI0504 11:15:06.816650 411 log.go:172] (0xc00096a160) Reply frame received for 1\nI0504 11:15:06.816717 411 log.go:172] (0xc00096a160) (0xc000970000) Create stream\nI0504 11:15:06.816734 411 log.go:172] (0xc00096a160) (0xc000970000) Stream added, broadcasting: 3\nI0504 11:15:06.818036 411 log.go:172] (0xc00096a160) Reply frame received for 3\nI0504 11:15:06.818071 411 log.go:172] (0xc00096a160) (0xc000665400) Create stream\nI0504 11:15:06.818079 411 log.go:172] (0xc00096a160) (0xc000665400) Stream added, broadcasting: 5\nI0504 11:15:06.818850 411 log.go:172] (0xc00096a160) Reply frame received for 5\nI0504 11:15:06.898397 411 log.go:172] (0xc00096a160) Data frame received for 5\nI0504 11:15:06.898429 411 log.go:172] (0xc000665400) (5) Data frame handling\nI0504 11:15:06.898465 411 
log.go:172] (0xc000665400) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0504 11:15:06.930689 411 log.go:172] (0xc00096a160) Data frame received for 3\nI0504 11:15:06.930717 411 log.go:172] (0xc000970000) (3) Data frame handling\nI0504 11:15:06.930738 411 log.go:172] (0xc000970000) (3) Data frame sent\nI0504 11:15:06.930964 411 log.go:172] (0xc00096a160) Data frame received for 3\nI0504 11:15:06.931009 411 log.go:172] (0xc000970000) (3) Data frame handling\nI0504 11:15:06.931053 411 log.go:172] (0xc00096a160) Data frame received for 5\nI0504 11:15:06.931082 411 log.go:172] (0xc000665400) (5) Data frame handling\nI0504 11:15:06.932880 411 log.go:172] (0xc00096a160) Data frame received for 1\nI0504 11:15:06.932891 411 log.go:172] (0xc000665360) (1) Data frame handling\nI0504 11:15:06.932907 411 log.go:172] (0xc000665360) (1) Data frame sent\nI0504 11:15:06.932918 411 log.go:172] (0xc00096a160) (0xc000665360) Stream removed, broadcasting: 1\nI0504 11:15:06.932997 411 log.go:172] (0xc00096a160) Go away received\nI0504 11:15:06.933289 411 log.go:172] (0xc00096a160) (0xc000665360) Stream removed, broadcasting: 1\nI0504 11:15:06.933303 411 log.go:172] (0xc00096a160) (0xc000970000) Stream removed, broadcasting: 3\nI0504 11:15:06.933309 411 log.go:172] (0xc00096a160) (0xc000665400) Stream removed, broadcasting: 5\n" May 4 11:15:06.937: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 11:15:06.937: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 11:15:06.961: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 4 11:15:16.966: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 4 11:15:16.966: INFO: Waiting for statefulset status.replicas updated to 0 May 4 11:15:17.000: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:17.000: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:06 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:17.000: INFO: May 4 11:15:17.000: INFO: StatefulSet ss has not reached scale 3, at 1 May 4 11:15:18.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98293616s May 4 11:15:19.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977448544s May 4 11:15:20.268: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.73884704s May 4 11:15:21.290: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.71540789s May 4 11:15:22.296: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.692646303s May 4 11:15:23.321: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.687169476s May 4 11:15:24.326: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.662190544s May 4 11:15:25.330: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.656920097s May 4 11:15:26.335: INFO: Verifying statefulset ss doesn't scale past 3 for another 652.809167ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace 
statefulset-5973 May 4 11:15:27.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:15:27.571: INFO: stderr: "I0504 11:15:27.487387 434 log.go:172] (0xc00003a580) (0xc000952000) Create stream\nI0504 11:15:27.487448 434 log.go:172] (0xc00003a580) (0xc000952000) Stream added, broadcasting: 1\nI0504 11:15:27.490158 434 log.go:172] (0xc00003a580) Reply frame received for 1\nI0504 11:15:27.490193 434 log.go:172] (0xc00003a580) (0xc000a90000) Create stream\nI0504 11:15:27.490209 434 log.go:172] (0xc00003a580) (0xc000a90000) Stream added, broadcasting: 3\nI0504 11:15:27.491150 434 log.go:172] (0xc00003a580) Reply frame received for 3\nI0504 11:15:27.491189 434 log.go:172] (0xc00003a580) (0xc000490be0) Create stream\nI0504 11:15:27.491207 434 log.go:172] (0xc00003a580) (0xc000490be0) Stream added, broadcasting: 5\nI0504 11:15:27.492216 434 log.go:172] (0xc00003a580) Reply frame received for 5\nI0504 11:15:27.564937 434 log.go:172] (0xc00003a580) Data frame received for 5\nI0504 11:15:27.564972 434 log.go:172] (0xc000490be0) (5) Data frame handling\nI0504 11:15:27.564989 434 log.go:172] (0xc000490be0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0504 11:15:27.565035 434 log.go:172] (0xc00003a580) Data frame received for 5\nI0504 11:15:27.565049 434 log.go:172] (0xc000490be0) (5) Data frame handling\nI0504 11:15:27.565073 434 log.go:172] (0xc00003a580) Data frame received for 3\nI0504 11:15:27.565088 434 log.go:172] (0xc000a90000) (3) Data frame handling\nI0504 11:15:27.565104 434 log.go:172] (0xc000a90000) (3) Data frame sent\nI0504 11:15:27.565323 434 log.go:172] (0xc00003a580) Data frame received for 3\nI0504 11:15:27.565339 434 log.go:172] (0xc000a90000) (3) Data frame handling\nI0504 11:15:27.566514 434 log.go:172] (0xc00003a580) Data frame received for 1\nI0504 11:15:27.566529 434 log.go:172] (0xc000952000) (1) Data frame handling\nI0504 11:15:27.566539 434 log.go:172] (0xc000952000) (1) Data frame sent\nI0504 11:15:27.566639 434 log.go:172] (0xc00003a580) (0xc000952000) Stream removed, broadcasting: 1\nI0504 11:15:27.566693 434 log.go:172] (0xc00003a580) Go away received\nI0504 11:15:27.566993 434 log.go:172] (0xc00003a580) (0xc000952000) Stream removed, broadcasting: 1\nI0504 11:15:27.567019 434 log.go:172] (0xc00003a580) (0xc000a90000) Stream removed, broadcasting: 3\nI0504 11:15:27.567035 434 log.go:172] (0xc00003a580) (0xc000490be0) Stream removed, broadcasting: 5\n" May 4 11:15:27.571: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 11:15:27.571: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 11:15:27.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:15:27.771: INFO: stderr: "I0504 11:15:27.693789 455 log.go:172] (0xc0009a13f0) (0xc000a206e0) Create stream\nI0504 11:15:27.693843 455 log.go:172] (0xc0009a13f0) (0xc000a206e0) Stream added, broadcasting: 1\nI0504 11:15:27.700051 455 log.go:172] (0xc0009a13f0) Reply frame received for 1\nI0504 11:15:27.700092 455 log.go:172] (0xc0009a13f0) (0xc000551680) Create stream\nI0504 11:15:27.700101 455 log.go:172] 
(0xc0009a13f0) (0xc000551680) Stream added, broadcasting: 3\nI0504 11:15:27.700982 455 log.go:172] (0xc0009a13f0) Reply frame received for 3\nI0504 11:15:27.701011 455 log.go:172] (0xc0009a13f0) (0xc000a20000) Create stream\nI0504 11:15:27.701020 455 log.go:172] (0xc0009a13f0) (0xc000a20000) Stream added, broadcasting: 5\nI0504 11:15:27.702132 455 log.go:172] (0xc0009a13f0) Reply frame received for 5\nI0504 11:15:27.765961 455 log.go:172] (0xc0009a13f0) Data frame received for 3\nI0504 11:15:27.765985 455 log.go:172] (0xc000551680) (3) Data frame handling\nI0504 11:15:27.765992 455 log.go:172] (0xc000551680) (3) Data frame sent\nI0504 11:15:27.766013 455 log.go:172] (0xc0009a13f0) Data frame received for 5\nI0504 11:15:27.766044 455 log.go:172] (0xc000a20000) (5) Data frame handling\nI0504 11:15:27.766073 455 log.go:172] (0xc000a20000) (5) Data frame sent\nI0504 11:15:27.766094 455 log.go:172] (0xc0009a13f0) Data frame received for 5\nI0504 11:15:27.766105 455 log.go:172] (0xc000a20000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0504 11:15:27.766129 455 log.go:172] (0xc0009a13f0) Data frame received for 3\nI0504 11:15:27.766144 455 log.go:172] (0xc000551680) (3) Data frame handling\nI0504 11:15:27.767475 455 log.go:172] (0xc0009a13f0) Data frame received for 1\nI0504 11:15:27.767486 455 log.go:172] (0xc000a206e0) (1) Data frame handling\nI0504 11:15:27.767492 455 log.go:172] (0xc000a206e0) (1) Data frame sent\nI0504 11:15:27.767499 455 log.go:172] (0xc0009a13f0) (0xc000a206e0) Stream removed, broadcasting: 1\nI0504 11:15:27.767651 455 log.go:172] (0xc0009a13f0) Go away received\nI0504 11:15:27.767793 455 log.go:172] (0xc0009a13f0) (0xc000a206e0) Stream removed, broadcasting: 1\nI0504 11:15:27.767806 455 log.go:172] (0xc0009a13f0) (0xc000551680) Stream removed, broadcasting: 3\nI0504 11:15:27.767815 455 log.go:172] (0xc0009a13f0) (0xc000a20000) Stream removed, broadcasting: 5\n" May 4 11:15:27.771: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 11:15:27.771: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 11:15:27.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:15:27.965: INFO: stderr: "I0504 11:15:27.895186 478 log.go:172] (0xc0009cd290) (0xc000a3c5a0) Create stream\nI0504 11:15:27.895243 478 log.go:172] (0xc0009cd290) (0xc000a3c5a0) Stream added, broadcasting: 1\nI0504 11:15:27.900154 478 log.go:172] (0xc0009cd290) Reply frame received for 1\nI0504 11:15:27.900210 478 log.go:172] (0xc0009cd290) (0xc000a3c000) Create stream\nI0504 11:15:27.900223 478 log.go:172] (0xc0009cd290) (0xc000a3c000) Stream added, broadcasting: 3\nI0504 11:15:27.901326 478 log.go:172] (0xc0009cd290) Reply frame received for 3\nI0504 11:15:27.901392 478 log.go:172] (0xc0009cd290) (0xc0006415e0) Create stream\nI0504 11:15:27.901411 478 log.go:172] (0xc0009cd290) (0xc0006415e0) Stream added, broadcasting: 5\nI0504 11:15:27.902262 478 log.go:172] (0xc0009cd290) Reply frame received for 5\nI0504 11:15:27.960432 478 log.go:172] (0xc0009cd290) Data frame received for 3\nI0504 11:15:27.960456 478 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0504 11:15:27.960477 478 log.go:172] (0xc0009cd290) 
Data frame received for 5\nI0504 11:15:27.960492 478 log.go:172] (0xc0006415e0) (5) Data frame handling\nI0504 11:15:27.960501 478 log.go:172] (0xc0006415e0) (5) Data frame sent\nI0504 11:15:27.960509 478 log.go:172] (0xc0009cd290) Data frame received for 5\nI0504 11:15:27.960515 478 log.go:172] (0xc0006415e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0504 11:15:27.960533 478 log.go:172] (0xc000a3c000) (3) Data frame sent\nI0504 11:15:27.960539 478 log.go:172] (0xc0009cd290) Data frame received for 3\nI0504 11:15:27.960545 478 log.go:172] (0xc000a3c000) (3) Data frame handling\nI0504 11:15:27.961838 478 log.go:172] (0xc0009cd290) Data frame received for 1\nI0504 11:15:27.961853 478 log.go:172] (0xc000a3c5a0) (1) Data frame handling\nI0504 11:15:27.961860 478 log.go:172] (0xc000a3c5a0) (1) Data frame sent\nI0504 11:15:27.961877 478 log.go:172] (0xc0009cd290) (0xc000a3c5a0) Stream removed, broadcasting: 1\nI0504 11:15:27.961902 478 log.go:172] (0xc0009cd290) Go away received\nI0504 11:15:27.962202 478 log.go:172] (0xc0009cd290) (0xc000a3c5a0) Stream removed, broadcasting: 1\nI0504 11:15:27.962214 478 log.go:172] (0xc0009cd290) (0xc000a3c000) Stream removed, broadcasting: 3\nI0504 11:15:27.962221 478 log.go:172] (0xc0009cd290) (0xc0006415e0) Stream removed, broadcasting: 5\n" May 4 11:15:27.966: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 11:15:27.966: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 11:15:27.969: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 4 11:15:37.975: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 4 11:15:37.975: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 4 11:15:37.975: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 4 11:15:37.979: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 11:15:38.189: INFO: stderr: "I0504 11:15:38.110950 499 log.go:172] (0xc00079cb00) (0xc000712320) Create stream\nI0504 11:15:38.111028 499 log.go:172] (0xc00079cb00) (0xc000712320) Stream added, broadcasting: 1\nI0504 11:15:38.114564 499 log.go:172] (0xc00079cb00) Reply frame received for 1\nI0504 11:15:38.114612 499 log.go:172] (0xc00079cb00) (0xc00047b180) Create stream\nI0504 11:15:38.114632 499 log.go:172] (0xc00079cb00) (0xc00047b180) Stream added, broadcasting: 3\nI0504 11:15:38.115514 499 log.go:172] (0xc00079cb00) Reply frame received for 3\nI0504 11:15:38.115562 499 log.go:172] (0xc00079cb00) (0xc00047b360) Create stream\nI0504 11:15:38.115577 499 log.go:172] (0xc00079cb00) (0xc00047b360) Stream added, broadcasting: 5\nI0504 11:15:38.116477 499 log.go:172] (0xc00079cb00) Reply frame received for 5\nI0504 11:15:38.182907 499 log.go:172] (0xc00079cb00) Data frame received for 5\nI0504 11:15:38.182943 499 log.go:172] (0xc00047b360) (5) Data frame handling\nI0504 11:15:38.182955 499 log.go:172] (0xc00047b360) (5) Data frame sent\nI0504 11:15:38.182963 499 log.go:172] (0xc00079cb00) Data frame received for 5\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0504 11:15:38.182971 499 log.go:172] (0xc00047b360) (5) Data frame handling\nI0504 11:15:38.183025 499 log.go:172] (0xc00079cb00) Data frame received for 3\nI0504 11:15:38.183065 499 log.go:172] (0xc00047b180) (3) Data frame handling\nI0504 11:15:38.183104 499 log.go:172] (0xc00047b180) (3) Data frame sent\nI0504 11:15:38.183116 499 log.go:172] (0xc00079cb00) Data frame received for 3\nI0504 11:15:38.183126 499 log.go:172] (0xc00047b180) (3) Data frame handling\nI0504 11:15:38.184520 499 log.go:172] (0xc00079cb00) Data frame received for 1\nI0504 11:15:38.184541 499 log.go:172] (0xc000712320) (1) Data frame handling\nI0504 11:15:38.184555 499 log.go:172] (0xc000712320) (1) Data frame sent\nI0504 11:15:38.184596 499 log.go:172] (0xc00079cb00) (0xc000712320) Stream removed, broadcasting: 1\nI0504 11:15:38.184638 499 log.go:172] (0xc00079cb00) Go away received\nI0504 11:15:38.184867 499 log.go:172] (0xc00079cb00) (0xc000712320) Stream removed, broadcasting: 1\nI0504 11:15:38.184889 499 log.go:172] (0xc00079cb00) (0xc00047b180) Stream removed, broadcasting: 3\nI0504 11:15:38.184900 499 log.go:172] (0xc00079cb00) (0xc00047b360) Stream removed, broadcasting: 5\n" May 4 11:15:38.189: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 11:15:38.189: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 11:15:38.189: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 11:15:38.413: INFO: stderr: "I0504 11:15:38.315728 522 log.go:172] (0xc0006c4000) (0xc00068b680) Create stream\nI0504 11:15:38.315778 522 log.go:172] (0xc0006c4000) (0xc00068b680) Stream added, broadcasting: 1\nI0504 11:15:38.317995 522 log.go:172] (0xc0006c4000) Reply frame received for 1\nI0504 11:15:38.318028 522 log.go:172] (0xc0006c4000) (0xc00068b720) Create stream\nI0504 11:15:38.318039 522 log.go:172] (0xc0006c4000) (0xc00068b720) Stream added, broadcasting: 3\nI0504 11:15:38.318922 522 log.go:172] (0xc0006c4000) Reply frame received for 3\nI0504 11:15:38.318959 522 log.go:172] (0xc0006c4000) (0xc000a66000) Create stream\nI0504 11:15:38.318972 522 log.go:172] (0xc0006c4000) (0xc000a66000) Stream added, broadcasting: 5\nI0504 11:15:38.319780 522 log.go:172] (0xc0006c4000) Reply frame received for 5\nI0504 11:15:38.375993 522 log.go:172] (0xc0006c4000) Data frame received for 5\nI0504 11:15:38.376021 522 log.go:172] (0xc000a66000) (5) Data frame handling\nI0504 11:15:38.376043 522 log.go:172] (0xc000a66000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0504 11:15:38.405882 522 log.go:172] (0xc0006c4000) Data frame received for 3\nI0504 11:15:38.405906 522 log.go:172] (0xc00068b720) (3) Data frame handling\nI0504 11:15:38.405913 522 log.go:172] (0xc00068b720) (3) Data frame sent\nI0504 11:15:38.405960 522 log.go:172] (0xc0006c4000) Data frame received for 5\nI0504 11:15:38.405985 522 log.go:172] (0xc000a66000) (5) Data frame handling\nI0504 11:15:38.406073 522 log.go:172] (0xc0006c4000) Data frame received for 3\nI0504 11:15:38.406103 522 log.go:172] (0xc00068b720) (3) Data frame handling\nI0504 11:15:38.407471 522 log.go:172] (0xc0006c4000) Data frame received for 1\nI0504 11:15:38.407492 522 log.go:172] (0xc00068b680) (1) Data frame handling\nI0504 
11:15:38.407511 522 log.go:172] (0xc00068b680) (1) Data frame sent\nI0504 11:15:38.407529 522 log.go:172] (0xc0006c4000) (0xc00068b680) Stream removed, broadcasting: 1\nI0504 11:15:38.407549 522 log.go:172] (0xc0006c4000) Go away received\nI0504 11:15:38.407960 522 log.go:172] (0xc0006c4000) (0xc00068b680) Stream removed, broadcasting: 1\nI0504 11:15:38.407986 522 log.go:172] (0xc0006c4000) (0xc00068b720) Stream removed, broadcasting: 3\nI0504 11:15:38.408000 522 log.go:172] (0xc0006c4000) (0xc000a66000) Stream removed, broadcasting: 5\n" May 4 11:15:38.413: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 11:15:38.413: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 11:15:38.413: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 11:15:38.660: INFO: stderr: "I0504 11:15:38.546480 543 log.go:172] (0xc000af08f0) (0xc00060d400) Create stream\nI0504 11:15:38.546550 543 log.go:172] (0xc000af08f0) (0xc00060d400) Stream added, broadcasting: 1\nI0504 11:15:38.549533 543 log.go:172] (0xc000af08f0) Reply frame received for 1\nI0504 11:15:38.549584 543 log.go:172] (0xc000af08f0) (0xc00092c000) Create stream\nI0504 11:15:38.549600 543 log.go:172] (0xc000af08f0) (0xc00092c000) Stream added, broadcasting: 3\nI0504 11:15:38.550745 543 log.go:172] (0xc000af08f0) Reply frame received for 3\nI0504 11:15:38.550784 543 log.go:172] (0xc000af08f0) (0xc00060d4a0) Create stream\nI0504 11:15:38.550794 543 log.go:172] (0xc000af08f0) (0xc00060d4a0) Stream added, broadcasting: 5\nI0504 11:15:38.551912 543 log.go:172] (0xc000af08f0) Reply frame received for 5\nI0504 11:15:38.615568 543 log.go:172] (0xc000af08f0) Data frame received for 5\nI0504 11:15:38.615613 543 log.go:172] (0xc00060d4a0) (5) Data frame handling\nI0504 11:15:38.615661 543 log.go:172] (0xc00060d4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0504 11:15:38.651624 543 log.go:172] (0xc000af08f0) Data frame received for 3\nI0504 11:15:38.651666 543 log.go:172] (0xc00092c000) (3) Data frame handling\nI0504 11:15:38.651702 543 log.go:172] (0xc00092c000) (3) Data frame sent\nI0504 11:15:38.651986 543 log.go:172] (0xc000af08f0) Data frame received for 5\nI0504 11:15:38.652003 543 log.go:172] (0xc00060d4a0) (5) Data frame handling\nI0504 11:15:38.652041 543 log.go:172] (0xc000af08f0) Data frame received for 3\nI0504 11:15:38.652074 543 log.go:172] (0xc00092c000) (3) Data frame handling\nI0504 11:15:38.654577 543 log.go:172] (0xc000af08f0) Data frame received for 1\nI0504 11:15:38.654605 543 log.go:172] (0xc00060d400) (1) Data frame handling\nI0504 11:15:38.654619 543 log.go:172] (0xc00060d400) (1) Data frame sent\nI0504 11:15:38.654633 543 log.go:172] (0xc000af08f0) (0xc00060d400) Stream removed, broadcasting: 1\nI0504 11:15:38.654660 543 log.go:172] (0xc000af08f0) Go away received\nI0504 11:15:38.655179 543 log.go:172] (0xc000af08f0) (0xc00060d400) Stream removed, broadcasting: 1\nI0504 11:15:38.655207 543 log.go:172] (0xc000af08f0) (0xc00092c000) Stream removed, broadcasting: 3\nI0504 11:15:38.655220 543 log.go:172] (0xc000af08f0) (0xc00060d4a0) Stream removed, broadcasting: 5\n" May 4 11:15:38.660: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 11:15:38.660: INFO: stdout of mv -v 
/usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 11:15:38.660: INFO: Waiting for statefulset status.replicas updated to 0 May 4 11:15:38.664: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 4 11:15:48.673: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 4 11:15:48.673: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 4 11:15:48.673: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 4 11:15:48.692: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:48.692: INFO: ss-0 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:48.692: INFO: ss-1 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:48.692: INFO: ss-2 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:48.692: INFO: May 4 11:15:48.692: INFO: StatefulSet ss has not reached scale 0, at 3 May 4 11:15:49.698: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:49.698: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:49.698: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:49.698: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:49.699: INFO: May 4 11:15:49.699: INFO: StatefulSet ss has not reached scale 0, at 3 May 4 11:15:50.723: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:50.723: INFO: ss-0 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:50.723: INFO: ss-1 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:50.723: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:50.723: INFO: May 4 11:15:50.723: INFO: StatefulSet ss has not reached scale 0, at 3 May 4 11:15:51.728: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:51.728: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:51.728: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:51.729: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:51.729: INFO: May 4 11:15:51.729: INFO: StatefulSet ss has not reached scale 0, at 3 May 4 11:15:52.734: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:52.734: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:52.735: INFO: ss-1 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:52.735: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:17 +0000 UTC }] May 4 11:15:52.735: INFO: May 4 11:15:52.735: INFO: StatefulSet ss has not reached scale 0, at 3 May 4 11:15:53.747: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:53.748: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:53.748: INFO: May 4 11:15:53.748: INFO: StatefulSet ss has not reached scale 0, at 1 May 4 11:15:54.752: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:54.752: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:54.752: INFO: May 4 11:15:54.752: INFO: StatefulSet ss has not reached scale 0, at 1 May 4 11:15:55.757: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:55.757: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:55.757: INFO: May 4 11:15:55.757: INFO: StatefulSet ss has not reached scale 0, at 1 May 4 11:15:56.762: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:56.762: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:56.762: INFO: May 4 11:15:56.762: INFO: StatefulSet ss has not reached scale 0, at 1 May 4 11:15:57.767: INFO: POD NODE PHASE GRACE CONDITIONS May 4 11:15:57.767: INFO: ss-0 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:15:38 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-04 11:14:56 +0000 UTC }] May 4 11:15:57.767: INFO: May 4 11:15:57.767: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5973 May 4 11:15:58.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:15:58.919: INFO: rc: 1 May 4 11:15:58.919: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 May 4 11:16:08.920: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:16:09.015: INFO: rc: 1 May 4 11:16:09.015: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:16:19.016: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:16:19.122: INFO: rc: 1 May 4 11:16:19.122: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || 
true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:16:29.122: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:16:29.226: INFO: rc: 1 May 4 11:16:29.226: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:16:39.226: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:16:39.321: INFO: rc: 1 May 4 11:16:39.321: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:16:49.322: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:16:49.429: INFO: rc: 1 May 4 11:16:49.429: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:16:59.430: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:16:59.590: INFO: rc: 1 May 4 11:16:59.590: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:17:09.590: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:17:09.692: INFO: rc: 1 May 4 11:17:09.692: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:17:19.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c 
mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:17:19.789: INFO: rc: 1 May 4 11:17:19.789: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:17:29.789: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:17:29.893: INFO: rc: 1 May 4 11:17:29.893: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:17:39.893: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:17:40.018: INFO: rc: 1 May 4 11:17:40.018: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:17:50.018: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:17:50.115: INFO: rc: 1 May 4 11:17:50.115: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:18:00.116: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:18:00.224: INFO: rc: 1 May 4 11:18:00.224: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:18:10.224: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:18:10.326: INFO: rc: 1 May 4 11:18:10.326: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:18:20.327: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:18:20.436: INFO: rc: 1 May 4 11:18:20.437: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:18:30.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:18:30.538: INFO: rc: 1 May 4 11:18:30.538: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:18:40.538: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:18:40.637: INFO: rc: 1 May 4 11:18:40.637: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:18:50.638: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:18:50.734: INFO: rc: 1 May 4 11:18:50.734: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:19:00.734: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:19:00.838: INFO: rc: 1 May 4 11:19:00.838: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:19:10.839: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:19:10.939: INFO: rc: 1 May 4 11:19:10.939: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:19:20.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:19:21.035: INFO: rc: 1 May 4 11:19:21.035: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:19:31.036: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:19:31.130: INFO: rc: 1 May 4 11:19:31.130: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:19:41.131: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:19:41.236: INFO: rc: 1 May 4 11:19:41.236: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:19:51.237: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:19:51.348: INFO: rc: 1 May 4 11:19:51.348: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:20:01.349: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:20:01.455: INFO: rc: 1 May 4 11:20:01.455: INFO: Waiting 10s to retry failed RunHostCmd: 
error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:20:11.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:20:11.560: INFO: rc: 1 May 4 11:20:11.560: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:20:21.560: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:20:21.776: INFO: rc: 1 May 4 11:20:21.776: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:20:31.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:20:33.500: INFO: rc: 1 May 4 11:20:33.500: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:20:43.500: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:20:44.703: INFO: rc: 1 May 4 11:20:44.703: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 May 4 11:20:54.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:20:54.797: INFO: rc: 1 May 4 11:20:54.797: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" 
not found error: exit status 1 May 4 11:21:04.798: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5973 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:21:04.894: INFO: rc: 1 May 4 11:21:04.894: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: May 4 11:21:04.894: INFO: Scaling statefulset ss to 0 May 4 11:21:04.910: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 4 11:21:04.912: INFO: Deleting all statefulset in ns statefulset-5973 May 4 11:21:04.914: INFO: Scaling statefulset ss to 0 May 4 11:21:04.920: INFO: Waiting for statefulset status.replicas updated to 0 May 4 11:21:04.922: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:21:04.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5973" for this suite. • [SLOW TEST:368.382 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":32,"skipped":567,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:21:04.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-dp2r STEP: Creating a pod to test atomic-volume-subpath May 4 11:21:05.071: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-dp2r" in namespace "subpath-8788" to be "Succeeded or Failed" May 4 11:21:05.075: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Pending", Reason="", readiness=false. Elapsed: 3.608903ms May 4 11:21:07.263: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.191763535s May 4 11:21:09.281: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 4.209436468s May 4 11:21:11.286: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 6.214376552s May 4 11:21:13.290: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 8.218514992s May 4 11:21:15.294: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 10.222898081s May 4 11:21:17.314: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 12.24288961s May 4 11:21:19.319: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 14.247136002s May 4 11:21:21.341: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 16.269050954s May 4 11:21:23.345: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 18.273290967s May 4 11:21:25.371: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 20.299175944s May 4 11:21:27.407: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Running", Reason="", readiness=true. Elapsed: 22.335543963s May 4 11:21:29.478: INFO: Pod "pod-subpath-test-configmap-dp2r": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.406645s STEP: Saw pod success May 4 11:21:29.478: INFO: Pod "pod-subpath-test-configmap-dp2r" satisfied condition "Succeeded or Failed" May 4 11:21:29.481: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-dp2r container test-container-subpath-configmap-dp2r: STEP: delete the pod May 4 11:21:29.615: INFO: Waiting for pod pod-subpath-test-configmap-dp2r to disappear May 4 11:21:29.627: INFO: Pod pod-subpath-test-configmap-dp2r no longer exists STEP: Deleting pod pod-subpath-test-configmap-dp2r May 4 11:21:29.627: INFO: Deleting pod "pod-subpath-test-configmap-dp2r" in namespace "subpath-8788" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:21:29.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8788" for this suite. 
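For readers who want to reproduce by hand what the configmap subPath spec above exercises — a single ConfigMap key projected via subPath onto a mountPath that is an existing file inside the image — a minimal sketch follows. The namespace, ConfigMap, and pod names here are illustrative assumptions, not objects created by the suite; the image and target file are borrowed from the httpd pods seen elsewhere in this run.

kubectl create namespace subpath-demo
kubectl -n subpath-demo create configmap demo-config --from-literal=index.html='hello from subPath'
kubectl -n subpath-demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo
spec:
  containers:
  - name: webserver
    image: docker.io/library/httpd:2.4.38-alpine
    volumeMounts:
    - name: cfg
      # mountPath points at an existing file in the image, not a directory
      mountPath: /usr/local/apache2/htdocs/index.html
      # subPath projects the single ConfigMap key over that file
      subPath: index.html
  volumes:
  - name: cfg
    configMap:
      name: demo-config
EOF
kubectl -n subpath-demo wait --for=condition=Ready pod/subpath-demo --timeout=60s
kubectl -n subpath-demo exec subpath-demo -- cat /usr/local/apache2/htdocs/index.html
kubectl delete namespace subpath-demo

Note that a subPath mount of a ConfigMap key, unlike a whole-ConfigMap volume mount, does not receive updates if the ConfigMap is modified after the pod starts.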
• [SLOW TEST:24.694 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":33,"skipped":574,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:21:29.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-f7l2 STEP: Creating a pod to test atomic-volume-subpath May 4 11:21:29.799: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-f7l2" in namespace "subpath-5120" to be "Succeeded or Failed" May 4 11:21:29.802: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.870778ms May 4 11:21:31.805: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006589303s May 4 11:21:33.810: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. Elapsed: 4.011041647s May 4 11:21:35.814: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. Elapsed: 6.015403998s May 4 11:21:37.818: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. Elapsed: 8.019502753s May 4 11:21:39.822: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. Elapsed: 10.023729051s May 4 11:21:41.826: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. Elapsed: 12.027790714s May 4 11:21:43.831: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. Elapsed: 14.031874656s May 4 11:21:45.834: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. Elapsed: 16.03528332s May 4 11:21:47.838: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. Elapsed: 18.039215336s May 4 11:21:49.843: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. Elapsed: 20.043996556s May 4 11:21:51.847: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.048088816s May 4 11:21:53.851: INFO: Pod "pod-subpath-test-downwardapi-f7l2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.052196958s STEP: Saw pod success May 4 11:21:53.851: INFO: Pod "pod-subpath-test-downwardapi-f7l2" satisfied condition "Succeeded or Failed" May 4 11:21:53.854: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-downwardapi-f7l2 container test-container-subpath-downwardapi-f7l2: STEP: delete the pod May 4 11:21:53.965: INFO: Waiting for pod pod-subpath-test-downwardapi-f7l2 to disappear May 4 11:21:53.987: INFO: Pod pod-subpath-test-downwardapi-f7l2 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-f7l2 May 4 11:21:53.987: INFO: Deleting pod "pod-subpath-test-downwardapi-f7l2" in namespace "subpath-5120" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:21:53.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5120" for this suite. • [SLOW TEST:24.397 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":34,"skipped":587,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:21:54.037: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:21:54.229: INFO: Creating deployment "test-recreate-deployment" May 4 11:21:54.237: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 4 11:21:54.242: INFO: new replicaset for deployment "test-recreate-deployment" is yet to be created May 4 11:21:56.348: INFO: Waiting deployment "test-recreate-deployment" to complete May 4 11:21:56.351: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188114, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188114, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum 
availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188114, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188114, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:21:58.355: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 4 11:21:58.363: INFO: Updating deployment test-recreate-deployment May 4 11:21:58.363: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 4 11:21:59.034: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-523 /apis/apps/v1/namespaces/deployment-523/deployments/test-recreate-deployment 60402c22-1349-4478-b733-5576f7201e77 1419877 2 2020-05-04 11:21:54 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-05-04 11:21:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-04 11:21:58 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 
102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0036d3d08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-04 11:21:58 +0000 UTC,LastTransitionTime:2020-05-04 11:21:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-05-04 11:21:58 +0000 UTC,LastTransitionTime:2020-05-04 11:21:54 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 4 11:21:59.049: INFO: New ReplicaSet 
"test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7 deployment-523 /apis/apps/v1/namespaces/deployment-523/replicasets/test-recreate-deployment-d5667d9c7 2f562537-e605-460c-a48f-7a590c573d81 1419874 1 2020-05-04 11:21:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 60402c22-1349-4478-b733-5576f7201e77 0xc003598220 0xc003598221}] [] [{kube-controller-manager Update apps/v1 2020-05-04 11:21:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 52 48 50 99 50 50 45 49 51 52 57 45 52 52 55 56 45 98 55 51 51 45 53 53 55 54 102 55 50 48 49 101 55 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 
97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003598298 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 11:21:59.049: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 4 11:21:59.049: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c deployment-523 /apis/apps/v1/namespaces/deployment-523/replicasets/test-recreate-deployment-74d98b5f7c 6d0b6a26-b15b-4b6e-be74-5b00e0929e73 1419865 2 2020-05-04 11:21:54 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 60402c22-1349-4478-b733-5576f7201e77 0xc003598127 0xc003598128}] [] [{kube-controller-manager Update apps/v1 2020-05-04 11:21:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 
123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 48 52 48 50 99 50 50 45 49 51 52 57 45 52 52 55 56 45 98 55 51 51 45 53 53 55 54 102 55 50 48 49 101 55 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035981b8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 11:21:59.053: INFO: Pod "test-recreate-deployment-d5667d9c7-8dg5q" is not available: &Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-8dg5q test-recreate-deployment-d5667d9c7- deployment-523 /api/v1/namespaces/deployment-523/pods/test-recreate-deployment-d5667d9c7-8dg5q 6b2234c6-0efc-4cfa-b95b-485d88d9d781 1419878 0 2020-05-04 11:21:58 +0000 UTC map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 2f562537-e605-460c-a48f-7a590c573d81 0xc003598770 0xc003598771}] [] [{kube-controller-manager Update v1 2020-05-04 11:21:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 102 53 54 50 53 51 55 45 101 54 48 53 45 52 54 48 99 45 97 52 56 102 45 55 97 53 57 48 99 53 55 51 100 56 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 11:21:59 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 
67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-lktfh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-lktfh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-lktfh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]Lo
calObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:21:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:21:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:21:58 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:21:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 11:21:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:21:59.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-523" for this suite. 
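For reference, the recreate behaviour exercised above is driven entirely by the Deployment's strategy field: with type Recreate the old ReplicaSet is scaled to zero before the new one is scaled up, which is why the dump shows ReplicaSet "test-recreate-deployment-74d98b5f7c" at Replicas:*0 while the new "d5667d9c7" pod is still Pending. A minimal sketch of a comparable manifest and rollout trigger, reusing the names and images that appear in the dump (the suite builds its objects through the Go client, so this is an approximation, not the exact spec):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-recreate-deployment
spec:
  replicas: 1
  strategy:
    type: Recreate                     # all old pods are terminated before any new one is created
  selector:
    matchLabels:
      name: sample-pod-3
  template:
    metadata:
      labels:
        name: sample-pod-3
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
EOF
# Trigger a new rollout (the test switches the template to httpd) and watch it:
kubectl set image deployment/test-recreate-deployment agnhost=docker.io/library/httpd:2.4.38-alpine
kubectl rollout status deployment/test-recreate-deployment
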
• [SLOW TEST:5.022 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":35,"skipped":613,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:21:59.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-6687 STEP: creating a selector STEP: Creating the service pods in kubernetes May 4 11:21:59.283: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 4 11:21:59.467: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 11:22:01.470: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 11:22:03.471: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:22:05.473: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:22:07.471: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:22:09.471: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:22:11.471: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:22:13.471: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:22:15.471: INFO: The status of Pod netserver-0 is Running (Ready = true) May 4 11:22:15.475: INFO: The status of Pod netserver-1 is Running (Ready = false) May 4 11:22:17.479: INFO: The status of Pod netserver-1 is Running (Ready = false) May 4 11:22:19.479: INFO: The status of Pod netserver-1 is Running (Ready = false) May 4 11:22:21.480: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 4 11:22:27.552: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.103:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6687 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 11:22:27.552: INFO: >>> kubeConfig: /root/.kube/config I0504 11:22:27.581945 7 log.go:172] (0xc001e60840) (0xc0024de3c0) Create stream I0504 11:22:27.581976 7 log.go:172] (0xc001e60840) (0xc0024de3c0) Stream added, broadcasting: 1 I0504 11:22:27.584105 7 log.go:172] (0xc001e60840) Reply frame received for 1 I0504 11:22:27.584159 7 log.go:172] (0xc001e60840) (0xc002a9b180) Create stream I0504 11:22:27.584175 7 log.go:172] (0xc001e60840) 
(0xc002a9b180) Stream added, broadcasting: 3 I0504 11:22:27.585006 7 log.go:172] (0xc001e60840) Reply frame received for 3 I0504 11:22:27.585042 7 log.go:172] (0xc001e60840) (0xc002a9b220) Create stream I0504 11:22:27.585051 7 log.go:172] (0xc001e60840) (0xc002a9b220) Stream added, broadcasting: 5 I0504 11:22:27.586097 7 log.go:172] (0xc001e60840) Reply frame received for 5 I0504 11:22:27.668469 7 log.go:172] (0xc001e60840) Data frame received for 3 I0504 11:22:27.668531 7 log.go:172] (0xc002a9b180) (3) Data frame handling I0504 11:22:27.668582 7 log.go:172] (0xc002a9b180) (3) Data frame sent I0504 11:22:27.668826 7 log.go:172] (0xc001e60840) Data frame received for 5 I0504 11:22:27.668847 7 log.go:172] (0xc002a9b220) (5) Data frame handling I0504 11:22:27.668885 7 log.go:172] (0xc001e60840) Data frame received for 3 I0504 11:22:27.668914 7 log.go:172] (0xc002a9b180) (3) Data frame handling I0504 11:22:27.670818 7 log.go:172] (0xc001e60840) Data frame received for 1 I0504 11:22:27.670862 7 log.go:172] (0xc0024de3c0) (1) Data frame handling I0504 11:22:27.670893 7 log.go:172] (0xc0024de3c0) (1) Data frame sent I0504 11:22:27.670917 7 log.go:172] (0xc001e60840) (0xc0024de3c0) Stream removed, broadcasting: 1 I0504 11:22:27.670941 7 log.go:172] (0xc001e60840) Go away received I0504 11:22:27.671111 7 log.go:172] (0xc001e60840) (0xc0024de3c0) Stream removed, broadcasting: 1 I0504 11:22:27.671141 7 log.go:172] (0xc001e60840) (0xc002a9b180) Stream removed, broadcasting: 3 I0504 11:22:27.671150 7 log.go:172] (0xc001e60840) (0xc002a9b220) Stream removed, broadcasting: 5 May 4 11:22:27.671: INFO: Found all expected endpoints: [netserver-0] May 4 11:22:27.675: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.52:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6687 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 11:22:27.675: INFO: >>> kubeConfig: /root/.kube/config I0504 11:22:27.708792 7 log.go:172] (0xc001ec0790) (0xc0025659a0) Create stream I0504 11:22:27.708822 7 log.go:172] (0xc001ec0790) (0xc0025659a0) Stream added, broadcasting: 1 I0504 11:22:27.711631 7 log.go:172] (0xc001ec0790) Reply frame received for 1 I0504 11:22:27.711675 7 log.go:172] (0xc001ec0790) (0xc0024de460) Create stream I0504 11:22:27.711685 7 log.go:172] (0xc001ec0790) (0xc0024de460) Stream added, broadcasting: 3 I0504 11:22:27.712633 7 log.go:172] (0xc001ec0790) Reply frame received for 3 I0504 11:22:27.712684 7 log.go:172] (0xc001ec0790) (0xc0024de5a0) Create stream I0504 11:22:27.712701 7 log.go:172] (0xc001ec0790) (0xc0024de5a0) Stream added, broadcasting: 5 I0504 11:22:27.713948 7 log.go:172] (0xc001ec0790) Reply frame received for 5 I0504 11:22:27.776058 7 log.go:172] (0xc001ec0790) Data frame received for 5 I0504 11:22:27.776092 7 log.go:172] (0xc0024de5a0) (5) Data frame handling I0504 11:22:27.776113 7 log.go:172] (0xc001ec0790) Data frame received for 3 I0504 11:22:27.776124 7 log.go:172] (0xc0024de460) (3) Data frame handling I0504 11:22:27.776169 7 log.go:172] (0xc0024de460) (3) Data frame sent I0504 11:22:27.776187 7 log.go:172] (0xc001ec0790) Data frame received for 3 I0504 11:22:27.776197 7 log.go:172] (0xc0024de460) (3) Data frame handling I0504 11:22:27.777713 7 log.go:172] (0xc001ec0790) Data frame received for 1 I0504 11:22:27.777763 7 log.go:172] (0xc0025659a0) (1) Data frame handling I0504 11:22:27.777792 7 log.go:172] (0xc0025659a0) (1) Data frame sent 
I0504 11:22:27.777806 7 log.go:172] (0xc001ec0790) (0xc0025659a0) Stream removed, broadcasting: 1 I0504 11:22:27.777816 7 log.go:172] (0xc001ec0790) Go away received I0504 11:22:27.777962 7 log.go:172] (0xc001ec0790) (0xc0025659a0) Stream removed, broadcasting: 1 I0504 11:22:27.777978 7 log.go:172] (0xc001ec0790) (0xc0024de460) Stream removed, broadcasting: 3 I0504 11:22:27.777986 7 log.go:172] (0xc001ec0790) (0xc0024de5a0) Stream removed, broadcasting: 5 May 4 11:22:27.778: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:22:27.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6687" for this suite. • [SLOW TEST:28.727 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":36,"skipped":633,"failed":0} SSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:22:27.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:22:27.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7882" for this suite. 
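The secret lifecycle just verified (create, list, patch, delete by label selector, list again) can be reproduced from the command line; a rough equivalent, where the secret name, key, and label are illustrative rather than the ones the suite generates:

kubectl create secret generic e2e-patch-secret --from-literal=data=value -n secrets-7882   # illustrative name/key
kubectl get secrets --all-namespaces                                                       # ensure more than zero exist
kubectl patch secret e2e-patch-secret -n secrets-7882 -p '{"metadata":{"labels":{"testsecret":"true"}}}'
kubectl delete secret -n secrets-7882 -l testsecret=true                                   # delete via the patched label
kubectl get secrets --all-namespaces -l testsecret=true                                    # should now return nothing
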
•{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":37,"skipped":638,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:22:27.912: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 11:22:28.543: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 11:22:30.559: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188148, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188148, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188148, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188148, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 11:22:33.629: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:22:33.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3375" for this suite. STEP: Destroying namespace "webhook-3375-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.870 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":38,"skipped":653,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:22:34.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs May 4 11:22:36.087: INFO: Waiting up to 5m0s for pod "pod-202c96aa-5669-45f9-bfd9-7ff20a51b7b4" in namespace "emptydir-7035" to be "Succeeded or Failed" May 4 11:22:36.139: INFO: Pod "pod-202c96aa-5669-45f9-bfd9-7ff20a51b7b4": Phase="Pending", Reason="", readiness=false. Elapsed: 52.228817ms May 4 11:22:38.159: INFO: Pod "pod-202c96aa-5669-45f9-bfd9-7ff20a51b7b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07241599s May 4 11:22:40.163: INFO: Pod "pod-202c96aa-5669-45f9-bfd9-7ff20a51b7b4": Phase="Running", Reason="", readiness=true. Elapsed: 4.076426281s May 4 11:22:42.168: INFO: Pod "pod-202c96aa-5669-45f9-bfd9-7ff20a51b7b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.080939582s STEP: Saw pod success May 4 11:22:42.168: INFO: Pod "pod-202c96aa-5669-45f9-bfd9-7ff20a51b7b4" satisfied condition "Succeeded or Failed" May 4 11:22:42.171: INFO: Trying to get logs from node kali-worker pod pod-202c96aa-5669-45f9-bfd9-7ff20a51b7b4 container test-container: STEP: delete the pod May 4 11:22:42.201: INFO: Waiting for pod pod-202c96aa-5669-45f9-bfd9-7ff20a51b7b4 to disappear May 4 11:22:42.217: INFO: Pod pod-202c96aa-5669-45f9-bfd9-7ff20a51b7b4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:22:42.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7035" for this suite. 
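The emptydir case above boils down to a pod that mounts a memory-backed emptyDir, writes a 0644 file as a non-root user, and exits successfully. A minimal sketch; the pod name, image, UID, and commands are illustrative, since the suite uses its own test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                     # run as a non-root UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo hello > /mnt/volume/data && chmod 0644 /mnt/volume/data && ls -l /mnt/volume/data"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                    # tmpfs-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo        # inspect the file mode once the pod has reached Succeeded
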
• [SLOW TEST:7.445 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":39,"skipped":664,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:22:42.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:22:55.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4687" for this suite. • [SLOW TEST:13.237 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":40,"skipped":685,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:22:55.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium May 4 11:22:55.554: INFO: Waiting up to 5m0s for pod "pod-43ce72c1-6962-4321-8a1d-df911fe6f9dc" in namespace "emptydir-567" to be "Succeeded or Failed" May 4 11:22:55.572: INFO: Pod "pod-43ce72c1-6962-4321-8a1d-df911fe6f9dc": Phase="Pending", Reason="", readiness=false. Elapsed: 18.602106ms May 4 11:22:57.576: INFO: Pod "pod-43ce72c1-6962-4321-8a1d-df911fe6f9dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022733928s May 4 11:22:59.581: INFO: Pod "pod-43ce72c1-6962-4321-8a1d-df911fe6f9dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027506816s STEP: Saw pod success May 4 11:22:59.581: INFO: Pod "pod-43ce72c1-6962-4321-8a1d-df911fe6f9dc" satisfied condition "Succeeded or Failed" May 4 11:22:59.584: INFO: Trying to get logs from node kali-worker2 pod pod-43ce72c1-6962-4321-8a1d-df911fe6f9dc container test-container: STEP: delete the pod May 4 11:22:59.627: INFO: Waiting for pod pod-43ce72c1-6962-4321-8a1d-df911fe6f9dc to disappear May 4 11:22:59.642: INFO: Pod pod-43ce72c1-6962-4321-8a1d-df911fe6f9dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:22:59.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-567" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":689,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:22:59.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 4 11:22:59.787: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:22:59.804: INFO: Number of nodes with available pods: 0 May 4 11:22:59.804: INFO: Node kali-worker is running more than one daemon pod May 4 11:23:00.870: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:00.874: INFO: Number of nodes with available pods: 0 May 4 11:23:00.874: INFO: Node kali-worker is running more than one daemon pod May 4 11:23:01.810: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:01.814: INFO: Number of nodes with available pods: 0 May 4 11:23:01.814: INFO: Node kali-worker is running more than one daemon pod May 4 11:23:02.870: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:02.875: INFO: Number of nodes with available pods: 0 May 4 11:23:02.875: INFO: Node kali-worker is running more than one daemon pod May 4 11:23:03.822: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:03.825: INFO: Number of nodes with available pods: 1 May 4 11:23:03.825: INFO: Node kali-worker is running more than one daemon pod May 4 11:23:04.811: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:04.849: INFO: Number of nodes with available pods: 2 May 4 11:23:04.849: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
May 4 11:23:04.936: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:04.954: INFO: Number of nodes with available pods: 1 May 4 11:23:04.954: INFO: Node kali-worker2 is running more than one daemon pod May 4 11:23:05.959: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:05.963: INFO: Number of nodes with available pods: 1 May 4 11:23:05.963: INFO: Node kali-worker2 is running more than one daemon pod May 4 11:23:06.959: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:06.963: INFO: Number of nodes with available pods: 1 May 4 11:23:06.963: INFO: Node kali-worker2 is running more than one daemon pod May 4 11:23:07.960: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:07.964: INFO: Number of nodes with available pods: 1 May 4 11:23:07.964: INFO: Node kali-worker2 is running more than one daemon pod May 4 11:23:08.961: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:23:08.965: INFO: Number of nodes with available pods: 2 May 4 11:23:08.965: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4774, will wait for the garbage collector to delete the pods May 4 11:23:09.038: INFO: Deleting DaemonSet.extensions daemon-set took: 16.529768ms May 4 11:23:09.339: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.232149ms May 4 11:23:23.442: INFO: Number of nodes with available pods: 0 May 4 11:23:23.442: INFO: Number of running nodes: 0, number of available pods: 0 May 4 11:23:23.448: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4774/daemonsets","resourceVersion":"1420437"},"items":null} May 4 11:23:23.451: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4774/pods","resourceVersion":"1420437"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:23:23.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4774" for this suite. 
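The DaemonSet used above is intentionally trivial: one pod per schedulable node, and the controller recreates any daemon pod whose phase is forced to Failed. A comparable manifest, with an illustrative label key and image:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-4774
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set        # illustrative label
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # illustrative image
EOF
# The control-plane node carries the node-role.kubernetes.io/master:NoSchedule taint,
# so only the two workers run a daemon pod; after one pod is marked Failed the
# controller brings the count back to one Ready pod per schedulable node:
kubectl -n daemonsets-4774 get pods -l daemonset-name=daemon-set -o wide
kubectl -n daemonsets-4774 rollout status daemonset/daemon-set
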
• [SLOW TEST:23.841 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":42,"skipped":700,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:23:23.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 4 11:23:23.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9184' May 4 11:23:23.657: INFO: stderr: "" May 4 11:23:23.657: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 May 4 11:23:23.677: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-9184' May 4 11:23:28.829: INFO: stderr: "" May 4 11:23:28.829: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:23:28.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9184" for this suite. 
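The kubectl invocation above maps directly onto the standard run/verify/delete pattern; with the --server and --kubeconfig flags omitted it is simply:

kubectl run e2e-test-httpd-pod --restart=Never \
    --image=docker.io/library/httpd:2.4.38-alpine \
    --namespace=kubectl-9184
kubectl get pod e2e-test-httpd-pod --namespace=kubectl-9184    # the test only checks that the Pod object exists
kubectl delete pod e2e-test-httpd-pod --namespace=kubectl-9184

With --restart=Never kubectl creates a bare Pod rather than a managed workload, which is why the cleanup step deletes the pod directly.
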
• [SLOW TEST:5.355 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":43,"skipped":722,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:23:28.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-a6882816-530f-4cd0-8ac2-8842a70e672e in namespace container-probe-4412 May 4 11:23:32.983: INFO: Started pod busybox-a6882816-530f-4cd0-8ac2-8842a70e672e in namespace container-probe-4412 STEP: checking the pod's current state and verifying that restartCount is present May 4 11:23:32.987: INFO: Initial restart count of pod busybox-a6882816-530f-4cd0-8ac2-8842a70e672e is 0 May 4 11:24:21.148: INFO: Restart count of pod container-probe-4412/busybox-a6882816-530f-4cd0-8ac2-8842a70e672e is now 1 (48.161857415s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:24:21.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4412" for this suite. 
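The restart observed above (restartCount going from 0 to 1 after roughly 48 seconds) is the standard exec-liveness pattern: the container creates /tmp/health, later removes it, and the probe's "cat /tmp/health" command starts to fail, so the kubelet restarts the container. A sketch of such a pod, where the name, image, and timings are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # fails once the file has been removed
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
# The restart shows up as an increasing restartCount in the container status:
kubectl get pod busybox-liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
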
• [SLOW TEST:52.351 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":44,"skipped":730,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:24:21.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override arguments May 4 11:24:21.282: INFO: Waiting up to 5m0s for pod "client-containers-afd90561-2d82-42b4-9e98-780acfb91222" in namespace "containers-8477" to be "Succeeded or Failed" May 4 11:24:21.304: INFO: Pod "client-containers-afd90561-2d82-42b4-9e98-780acfb91222": Phase="Pending", Reason="", readiness=false. Elapsed: 21.876533ms May 4 11:24:23.307: INFO: Pod "client-containers-afd90561-2d82-42b4-9e98-780acfb91222": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025277991s May 4 11:24:25.312: INFO: Pod "client-containers-afd90561-2d82-42b4-9e98-780acfb91222": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029692064s STEP: Saw pod success May 4 11:24:25.312: INFO: Pod "client-containers-afd90561-2d82-42b4-9e98-780acfb91222" satisfied condition "Succeeded or Failed" May 4 11:24:25.315: INFO: Trying to get logs from node kali-worker pod client-containers-afd90561-2d82-42b4-9e98-780acfb91222 container test-container: STEP: delete the pod May 4 11:24:25.368: INFO: Waiting for pod client-containers-afd90561-2d82-42b4-9e98-780acfb91222 to disappear May 4 11:24:25.382: INFO: Pod client-containers-afd90561-2d82-42b4-9e98-780acfb91222 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:24:25.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8477" for this suite. 
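Overriding an image's default arguments, as tested above, only requires setting args in the container spec: command replaces the image ENTRYPOINT, args replaces its CMD. A minimal sketch; the pod name, image, and arguments are illustrative, since the suite uses its own test image and inspects the container output:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: override-args-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    args: ["echo", "overridden arguments"]   # replaces only the image's default CMD
EOF
kubectl logs override-args-demo              # prints the overridden arguments once the pod has run
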
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":45,"skipped":793,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:24:25.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 4 11:24:25.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cc943a27-2a24-41e1-9880-65c3ed896bc9" in namespace "downward-api-8653" to be "Succeeded or Failed" May 4 11:24:25.586: INFO: Pod "downwardapi-volume-cc943a27-2a24-41e1-9880-65c3ed896bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 17.989991ms May 4 11:24:27.591: INFO: Pod "downwardapi-volume-cc943a27-2a24-41e1-9880-65c3ed896bc9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022654526s May 4 11:24:29.596: INFO: Pod "downwardapi-volume-cc943a27-2a24-41e1-9880-65c3ed896bc9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027429907s STEP: Saw pod success May 4 11:24:29.596: INFO: Pod "downwardapi-volume-cc943a27-2a24-41e1-9880-65c3ed896bc9" satisfied condition "Succeeded or Failed" May 4 11:24:29.600: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-cc943a27-2a24-41e1-9880-65c3ed896bc9 container client-container: STEP: delete the pod May 4 11:24:29.647: INFO: Waiting for pod downwardapi-volume-cc943a27-2a24-41e1-9880-65c3ed896bc9 to disappear May 4 11:24:29.658: INFO: Pod downwardapi-volume-cc943a27-2a24-41e1-9880-65c3ed896bc9 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:24:29.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8653" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":46,"skipped":822,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:24:29.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:24:45.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1751" for this suite. • [SLOW TEST:16.215 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":47,"skipped":837,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:24:45.882: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token May 4 11:24:46.532: INFO: created pod pod-service-account-defaultsa May 4 11:24:46.532: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 4 11:24:46.562: INFO: created pod pod-service-account-mountsa May 4 11:24:46.562: INFO: pod pod-service-account-mountsa service account token volume mount: true May 4 11:24:46.582: INFO: created pod pod-service-account-nomountsa May 4 11:24:46.582: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 4 11:24:46.660: INFO: created pod pod-service-account-defaultsa-mountspec May 4 11:24:46.660: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 4 11:24:46.683: INFO: created pod pod-service-account-mountsa-mountspec May 4 11:24:46.683: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 4 11:24:46.719: INFO: created pod 
pod-service-account-nomountsa-mountspec May 4 11:24:46.719: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 4 11:24:46.786: INFO: created pod pod-service-account-defaultsa-nomountspec May 4 11:24:46.786: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 4 11:24:46.804: INFO: created pod pod-service-account-mountsa-nomountspec May 4 11:24:46.804: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 4 11:24:46.840: INFO: created pod pod-service-account-nomountsa-nomountspec May 4 11:24:46.840: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:24:46.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5746" for this suite. •{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":275,"completed":48,"skipped":858,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:24:47.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:24:47.244: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 4 11:24:52.259: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 4 11:25:02.358: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 4 11:25:02.404: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-5881 /apis/apps/v1/namespaces/deployment-5881/deployments/test-cleanup-deployment 7cd38d88-3f05-44c6-b1f4-4ee4b835f1f3 1420967 1 2020-05-04 11:25:02 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-05-04 11:25:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 
102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035187a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 4 11:25:02.415: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-5881 /apis/apps/v1/namespaces/deployment-5881/replicasets/test-cleanup-deployment-b4867b47f fc39b7a3-3300-46fd-98fd-bd17b039920c 1420969 1 2020-05-04 11:25:02 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 
deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 7cd38d88-3f05-44c6-b1f4-4ee4b835f1f3 0xc001e33930 0xc001e33931}] [] [{kube-controller-manager Update apps/v1 2020-05-04 11:25:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 99 100 51 56 100 56 56 45 51 102 48 53 45 52 52 99 54 45 98 49 102 52 45 52 101 101 52 98 56 51 53 102 49 102 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 
101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001e33a48 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 11:25:02.415: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 4 11:25:02.415: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-5881 /apis/apps/v1/namespaces/deployment-5881/replicasets/test-cleanup-controller 549fdea8-9f96-427b-80bd-3a33864f9312 1420968 1 2020-05-04 11:24:47 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 7cd38d88-3f05-44c6-b1f4-4ee4b835f1f3 0xc001e337e7 0xc001e337e8}] [] [{e2e.test Update apps/v1 2020-05-04 11:24:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 
121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-04 11:25:02 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 99 100 51 56 100 56 56 45 51 102 48 53 45 52 52 99 54 45 98 49 102 52 45 52 101 101 52 98 56 51 53 102 49 102 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001e338a8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 4 11:25:02.471: INFO: Pod "test-cleanup-controller-7rdlr" is available: &Pod{ObjectMeta:{test-cleanup-controller-7rdlr test-cleanup-controller- deployment-5881 /api/v1/namespaces/deployment-5881/pods/test-cleanup-controller-7rdlr f8747fb0-98a7-49aa-8856-eac9c7291468 1420956 0 2020-05-04 11:24:47 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller 549fdea8-9f96-427b-80bd-3a33864f9312 0xc001c82377 0xc001c82378}] [] [{kube-controller-manager Update v1 2020-05-04 11:24:47 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 52 57 102 100 101 97 56 45 57 102 57 54 45 52 50 55 98 45 56 48 98 100 45 51 97 51 51 56 54 52 102 57 51 49 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 
111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 11:25:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 54 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2ddrv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2ddrv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2ddrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:24:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:24:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.63,StartTime:2020-05-04 11:24:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 11:24:59 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fdc1ceb09b549ec1e18ed0cd703bd24291d759221f18dd78d87c82da930bfd82,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.63,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 4 11:25:02.472: INFO: Pod "test-cleanup-deployment-b4867b47f-k6fr9" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-k6fr9 test-cleanup-deployment-b4867b47f- deployment-5881 /api/v1/namespaces/deployment-5881/pods/test-cleanup-deployment-b4867b47f-k6fr9 fe20290d-82ea-4f12-af92-47c0f5b3a766 1420975 0 2020-05-04 11:25:02 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f fc39b7a3-3300-46fd-98fd-bd17b039920c 0xc001c82850 0xc001c82851}] [] [{kube-controller-manager Update v1 2020-05-04 11:25:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 99 51 57 98 55 97 51 45 51 51 48 48 45 52 54 102 100 45 57 56 102 100 45 98 100 49 55 98 48 51 57 57 50 48 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2ddrv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2ddrv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2ddrv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:25:02 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:25:02.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5881" for this suite. 
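The cleanup behaviour exercised above is driven by spec.revisionHistoryLimit, which the dumped Deployment shows as *0: with a limit of zero the controller deletes superseded ReplicaSets instead of keeping them around for rollback. A minimal client-go sketch of such a Deployment, again assuming /root/.kube/config and the default namespace (object names are illustrative):

package main

import (
	"context"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "cleanup-demo"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			// A limit of 0 tells the Deployment controller to delete old
			// ReplicaSets as soon as they are fully scaled down.
			RevisionHistoryLimit: int32Ptr(0),
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "httpd",
						Image: "docker.io/library/httpd:2.4.38-alpine",
					}},
				},
			},
		},
	}

	created, err := cs.AppsV1().Deployments("default").Create(context.TODO(), d, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created deployment", created.Name)
}

After a template change rolls out, listing ReplicaSets by the name=cleanup-pod selector should eventually return only the current one, which is what the test waits for before passing.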
• [SLOW TEST:15.509 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":49,"skipped":878,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:02.572: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-2319ad62-7cd7-4853-97ad-6d897fe3b7b9 STEP: Creating a pod to test consume configMaps May 4 11:25:02.698: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-53af15f5-8b7c-47eb-8b29-3cb153dbef90" in namespace "projected-3537" to be "Succeeded or Failed" May 4 11:25:02.723: INFO: Pod "pod-projected-configmaps-53af15f5-8b7c-47eb-8b29-3cb153dbef90": Phase="Pending", Reason="", readiness=false. Elapsed: 24.489372ms May 4 11:25:04.881: INFO: Pod "pod-projected-configmaps-53af15f5-8b7c-47eb-8b29-3cb153dbef90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183095696s May 4 11:25:06.943: INFO: Pod "pod-projected-configmaps-53af15f5-8b7c-47eb-8b29-3cb153dbef90": Phase="Pending", Reason="", readiness=false. Elapsed: 4.244603739s May 4 11:25:08.947: INFO: Pod "pod-projected-configmaps-53af15f5-8b7c-47eb-8b29-3cb153dbef90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.248253986s STEP: Saw pod success May 4 11:25:08.947: INFO: Pod "pod-projected-configmaps-53af15f5-8b7c-47eb-8b29-3cb153dbef90" satisfied condition "Succeeded or Failed" May 4 11:25:08.949: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-53af15f5-8b7c-47eb-8b29-3cb153dbef90 container projected-configmap-volume-test: STEP: delete the pod May 4 11:25:09.067: INFO: Waiting for pod pod-projected-configmaps-53af15f5-8b7c-47eb-8b29-3cb153dbef90 to disappear May 4 11:25:09.084: INFO: Pod pod-projected-configmaps-53af15f5-8b7c-47eb-8b29-3cb153dbef90 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:25:09.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3537" for this suite. 
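The projected-ConfigMap spec above combines two pieces: a projected volume whose source is a ConfigMap, and a pod-level security context that forces the containers to run as a non-root UID. A rough client-go sketch under the same assumptions as before (kubeconfig at /root/.kube/config, default namespace, busybox image, UID 1000 chosen arbitrarily):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps("default").Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmap-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// Run the whole pod as a non-root UID.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-demo"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox", // illustrative image
				Command: []string{"cat", "/etc/projected/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod", pod.Name)
}

The pod's log should print the ConfigMap value even though the reading process is not root, since the kubelet projects the key as a world-readable file by default.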
• [SLOW TEST:6.538 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":50,"skipped":880,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:09.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 4 11:25:09.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9576 /api/v1/namespaces/watch-9576/configmaps/e2e-watch-test-resource-version eea806e1-375a-44d5-b060-130419a11dcc 1421057 0 2020-05-04 11:25:09 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-04 11:25:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} May 4 11:25:09.324: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9576 /api/v1/namespaces/watch-9576/configmaps/e2e-watch-test-resource-version eea806e1-375a-44d5-b060-130419a11dcc 1421058 0 2020-05-04 11:25:09 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-05-04 11:25:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:25:09.324: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "watch-9576" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":51,"skipped":891,"failed":0} ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:09.332: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs May 4 11:25:09.457: INFO: Waiting up to 5m0s for pod "pod-5b896ac4-bf80-4f28-ac13-0d7a82724684" in namespace "emptydir-9729" to be "Succeeded or Failed" May 4 11:25:09.480: INFO: Pod "pod-5b896ac4-bf80-4f28-ac13-0d7a82724684": Phase="Pending", Reason="", readiness=false. Elapsed: 22.879628ms May 4 11:25:11.486: INFO: Pod "pod-5b896ac4-bf80-4f28-ac13-0d7a82724684": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028514643s May 4 11:25:13.618: INFO: Pod "pod-5b896ac4-bf80-4f28-ac13-0d7a82724684": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160885156s STEP: Saw pod success May 4 11:25:13.618: INFO: Pod "pod-5b896ac4-bf80-4f28-ac13-0d7a82724684" satisfied condition "Succeeded or Failed" May 4 11:25:13.621: INFO: Trying to get logs from node kali-worker2 pod pod-5b896ac4-bf80-4f28-ac13-0d7a82724684 container test-container: STEP: delete the pod May 4 11:25:13.758: INFO: Waiting for pod pod-5b896ac4-bf80-4f28-ac13-0d7a82724684 to disappear May 4 11:25:13.772: INFO: Pod pod-5b896ac4-bf80-4f28-ac13-0d7a82724684 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:25:13.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9729" for this suite. 
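The emptyDir spec above turns on two things at once: emptyDir.medium set to Memory, which backs the volume with tmpfs, and a 0777 file mode written and read back by a non-root user. A small client-go sketch of a comparable pod, with the same assumptions as the earlier sketches (kubeconfig path, default namespace, busybox image, arbitrary UID):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int64Ptr(i int64) *int64 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)}, // non-root UID
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative image
				// Create a file with 0777 permissions and list it back.
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0777 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}

	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod", pod.Name)
}

The conformance test asserts on the pod's log output (mount type, file mode, and ownership); the sketch just prints the equivalent ls -l line.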
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":52,"skipped":891,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:13.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC May 4 11:25:13.908: INFO: namespace kubectl-4342 May 4 11:25:13.908: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4342' May 4 11:25:14.169: INFO: stderr: "" May 4 11:25:14.169: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 4 11:25:15.173: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:25:15.173: INFO: Found 0 / 1 May 4 11:25:16.229: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:25:16.229: INFO: Found 0 / 1 May 4 11:25:17.193: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:25:17.193: INFO: Found 1 / 1 May 4 11:25:17.193: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 4 11:25:17.211: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:25:17.211: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 4 11:25:17.211: INFO: wait on agnhost-master startup in kubectl-4342 May 4 11:25:17.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs agnhost-master-wrkfb agnhost-master --namespace=kubectl-4342' May 4 11:25:17.326: INFO: stderr: "" May 4 11:25:17.326: INFO: stdout: "Paused\n" STEP: exposing RC May 4 11:25:17.326: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4342' May 4 11:25:17.532: INFO: stderr: "" May 4 11:25:17.532: INFO: stdout: "service/rm2 exposed\n" May 4 11:25:17.540: INFO: Service rm2 in namespace kubectl-4342 found. STEP: exposing service May 4 11:25:19.548: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4342' May 4 11:25:19.701: INFO: stderr: "" May 4 11:25:19.701: INFO: stdout: "service/rm3 exposed\n" May 4 11:25:19.709: INFO: Service rm3 in namespace kubectl-4342 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:25:21.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4342" for this suite. 
• [SLOW TEST:7.943 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":53,"skipped":903,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:21.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:25:21.818: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:25:22.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4481" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":275,"completed":54,"skipped":910,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:22.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium May 4 11:25:22.968: INFO: Waiting up to 5m0s for pod "pod-7d28ff37-1e40-42ca-8661-e35b2eaa87ed" in namespace "emptydir-5424" to be "Succeeded or Failed" May 4 11:25:22.995: INFO: Pod "pod-7d28ff37-1e40-42ca-8661-e35b2eaa87ed": Phase="Pending", Reason="", readiness=false. Elapsed: 26.842551ms May 4 11:25:24.999: INFO: Pod "pod-7d28ff37-1e40-42ca-8661-e35b2eaa87ed": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.031171174s May 4 11:25:27.014: INFO: Pod "pod-7d28ff37-1e40-42ca-8661-e35b2eaa87ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045271776s STEP: Saw pod success May 4 11:25:27.014: INFO: Pod "pod-7d28ff37-1e40-42ca-8661-e35b2eaa87ed" satisfied condition "Succeeded or Failed" May 4 11:25:27.017: INFO: Trying to get logs from node kali-worker2 pod pod-7d28ff37-1e40-42ca-8661-e35b2eaa87ed container test-container: STEP: delete the pod May 4 11:25:27.057: INFO: Waiting for pod pod-7d28ff37-1e40-42ca-8661-e35b2eaa87ed to disappear May 4 11:25:27.073: INFO: Pod pod-7d28ff37-1e40-42ca-8661-e35b2eaa87ed no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:25:27.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5424" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":912,"failed":0} SSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:27.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-7bb67adf-06a6-4e5c-8015-b0fd0e6a0917 May 4 11:25:27.546: INFO: Pod name my-hostname-basic-7bb67adf-06a6-4e5c-8015-b0fd0e6a0917: Found 0 pods out of 1 May 4 11:25:32.577: INFO: Pod name my-hostname-basic-7bb67adf-06a6-4e5c-8015-b0fd0e6a0917: Found 1 pods out of 1 May 4 11:25:32.577: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-7bb67adf-06a6-4e5c-8015-b0fd0e6a0917" are running May 4 11:25:32.580: INFO: Pod "my-hostname-basic-7bb67adf-06a6-4e5c-8015-b0fd0e6a0917-x5xnh" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 11:25:27 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 11:25:30 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 11:25:30 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 11:25:27 +0000 UTC Reason: Message:}]) May 4 11:25:32.580: INFO: Trying to dial the pod May 4 11:25:37.592: INFO: Controller my-hostname-basic-7bb67adf-06a6-4e5c-8015-b0fd0e6a0917: Got expected result from replica 1 [my-hostname-basic-7bb67adf-06a6-4e5c-8015-b0fd0e6a0917-x5xnh]: "my-hostname-basic-7bb67adf-06a6-4e5c-8015-b0fd0e6a0917-x5xnh", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 
11:25:37.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5263" for this suite. • [SLOW TEST:10.423 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":56,"skipped":918,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:37.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 4 11:25:37.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c66cbe7e-95b4-4705-b5d0-895ffd2e5e6a" in namespace "projected-2198" to be "Succeeded or Failed" May 4 11:25:37.714: INFO: Pod "downwardapi-volume-c66cbe7e-95b4-4705-b5d0-895ffd2e5e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.252139ms May 4 11:25:39.718: INFO: Pod "downwardapi-volume-c66cbe7e-95b4-4705-b5d0-895ffd2e5e6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017935432s May 4 11:25:41.722: INFO: Pod "downwardapi-volume-c66cbe7e-95b4-4705-b5d0-895ffd2e5e6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022197298s STEP: Saw pod success May 4 11:25:41.722: INFO: Pod "downwardapi-volume-c66cbe7e-95b4-4705-b5d0-895ffd2e5e6a" satisfied condition "Succeeded or Failed" May 4 11:25:41.725: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-c66cbe7e-95b4-4705-b5d0-895ffd2e5e6a container client-container: STEP: delete the pod May 4 11:25:41.847: INFO: Waiting for pod downwardapi-volume-c66cbe7e-95b4-4705-b5d0-895ffd2e5e6a to disappear May 4 11:25:41.860: INFO: Pod downwardapi-volume-c66cbe7e-95b4-4705-b5d0-895ffd2e5e6a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:25:41.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2198" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":928,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:41.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:25:58.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2003" for this suite. • [SLOW TEST:16.556 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":58,"skipped":930,"failed":0} SSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:25:58.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-6983/configmap-test-80c7597a-04a1-45b3-a51f-13930bd5d2e8 STEP: Creating a pod to test consume configMaps May 4 11:25:58.647: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb15620d-3260-4f2d-bd1d-106451912d89" in namespace "configmap-6983" to be "Succeeded or Failed" May 4 11:25:58.703: INFO: Pod "pod-configmaps-bb15620d-3260-4f2d-bd1d-106451912d89": Phase="Pending", Reason="", readiness=false. 
Elapsed: 55.720446ms May 4 11:26:00.707: INFO: Pod "pod-configmaps-bb15620d-3260-4f2d-bd1d-106451912d89": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059724269s May 4 11:26:02.711: INFO: Pod "pod-configmaps-bb15620d-3260-4f2d-bd1d-106451912d89": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063580593s STEP: Saw pod success May 4 11:26:02.711: INFO: Pod "pod-configmaps-bb15620d-3260-4f2d-bd1d-106451912d89" satisfied condition "Succeeded or Failed" May 4 11:26:02.713: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-bb15620d-3260-4f2d-bd1d-106451912d89 container env-test: STEP: delete the pod May 4 11:26:02.805: INFO: Waiting for pod pod-configmaps-bb15620d-3260-4f2d-bd1d-106451912d89 to disappear May 4 11:26:02.812: INFO: Pod pod-configmaps-bb15620d-3260-4f2d-bd1d-106451912d89 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:26:02.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6983" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":938,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:26:02.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:26:02.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1509" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. 
[Conformance]","total":275,"completed":60,"skipped":969,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:26:02.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 4 11:26:11.096: INFO: 0 pods remaining May 4 11:26:11.096: INFO: 0 pods has nil DeletionTimestamp May 4 11:26:11.096: INFO: May 4 11:26:11.930: INFO: 0 pods remaining May 4 11:26:11.930: INFO: 0 pods has nil DeletionTimestamp May 4 11:26:11.930: INFO: May 4 11:26:12.786: INFO: 0 pods remaining May 4 11:26:12.786: INFO: 0 pods has nil DeletionTimestamp May 4 11:26:12.786: INFO: STEP: Gathering metrics W0504 11:26:13.814424 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 4 11:26:13.814: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:26:13.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3756" for this suite. 
• [SLOW TEST:10.871 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":61,"skipped":997,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:26:13.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 4 11:26:14.393: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-5612' May 4 11:26:14.606: INFO: stderr: "" May 4 11:26:14.606: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 4 11:26:19.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-5612 -o json' May 4 11:26:19.781: INFO: stderr: "" May 4 11:26:19.781: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-04T11:26:14Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"managedFields\": [\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:metadata\": {\n \"f:labels\": {\n \".\": {},\n \"f:run\": {}\n }\n },\n \"f:spec\": {\n \"f:containers\": {\n \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n \".\": {},\n \"f:image\": {},\n \"f:imagePullPolicy\": {},\n \"f:name\": {},\n \"f:resources\": {},\n \"f:terminationMessagePath\": {},\n \"f:terminationMessagePolicy\": {}\n }\n },\n \"f:dnsPolicy\": {},\n \"f:enableServiceLinks\": {},\n \"f:restartPolicy\": {},\n \"f:schedulerName\": {},\n \"f:securityContext\": {},\n \"f:terminationGracePeriodSeconds\": {}\n }\n },\n \"manager\": \"kubectl\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-04T11:26:14Z\"\n },\n {\n \"apiVersion\": \"v1\",\n \"fieldsType\": \"FieldsV1\",\n \"fieldsV1\": {\n \"f:status\": {\n \"f:conditions\": {\n \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n 
\"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n },\n \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n \".\": {},\n \"f:lastProbeTime\": {},\n \"f:lastTransitionTime\": {},\n \"f:status\": {},\n \"f:type\": {}\n }\n },\n \"f:containerStatuses\": {},\n \"f:hostIP\": {},\n \"f:phase\": {},\n \"f:podIP\": {},\n \"f:podIPs\": {\n \".\": {},\n \"k:{\\\"ip\\\":\\\"10.244.2.125\\\"}\": {\n \".\": {},\n \"f:ip\": {}\n }\n },\n \"f:startTime\": {}\n }\n },\n \"manager\": \"kubelet\",\n \"operation\": \"Update\",\n \"time\": \"2020-05-04T11:26:18Z\"\n }\n ],\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-5612\",\n \"resourceVersion\": \"1421658\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5612/pods/e2e-test-httpd-pod\",\n \"uid\": \"f1df25ee-0bdf-4234-a23d-70858f76d3e6\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-vdrf9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"kali-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-vdrf9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-vdrf9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-04T11:26:14Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-04T11:26:18Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-04T11:26:18Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-04T11:26:14Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://b4c22d64f37410904d975c681959a65eb0a7a27d27e3cd31f6ca6a8e495855dd\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-04T11:26:17Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.15\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.125\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.125\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": 
\"2020-05-04T11:26:14Z\"\n }\n}\n" STEP: replace the image in the pod May 4 11:26:19.781: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-5612' May 4 11:26:20.180: INFO: stderr: "" May 4 11:26:20.180: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459 May 4 11:26:20.192: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-5612' May 4 11:26:33.737: INFO: stderr: "" May 4 11:26:33.737: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:26:33.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5612" for this suite. • [SLOW TEST:19.904 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":275,"completed":62,"skipped":1012,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:26:33.753: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 4 11:26:37.888: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:26:38.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-395" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":63,"skipped":1080,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:26:38.257: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 4 11:26:38.511: INFO: Waiting up to 5m0s for pod "downwardapi-volume-611f5f20-e9a4-4e7b-889c-e0ac08f955e6" in namespace "downward-api-2894" to be "Succeeded or Failed" May 4 11:26:38.525: INFO: Pod "downwardapi-volume-611f5f20-e9a4-4e7b-889c-e0ac08f955e6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.41525ms May 4 11:26:40.530: INFO: Pod "downwardapi-volume-611f5f20-e9a4-4e7b-889c-e0ac08f955e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018678319s May 4 11:26:42.535: INFO: Pod "downwardapi-volume-611f5f20-e9a4-4e7b-889c-e0ac08f955e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023803011s STEP: Saw pod success May 4 11:26:42.535: INFO: Pod "downwardapi-volume-611f5f20-e9a4-4e7b-889c-e0ac08f955e6" satisfied condition "Succeeded or Failed" May 4 11:26:42.538: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-611f5f20-e9a4-4e7b-889c-e0ac08f955e6 container client-container: STEP: delete the pod May 4 11:26:42.588: INFO: Waiting for pod downwardapi-volume-611f5f20-e9a4-4e7b-889c-e0ac08f955e6 to disappear May 4 11:26:42.610: INFO: Pod downwardapi-volume-611f5f20-e9a4-4e7b-889c-e0ac08f955e6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:26:42.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2894" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":64,"skipped":1081,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:26:42.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 4 11:26:42.813: INFO: Waiting up to 5m0s for pod "downwardapi-volume-55745802-3d04-407e-bb13-509d8174bc30" in namespace "downward-api-7435" to be "Succeeded or Failed" May 4 11:26:42.815: INFO: Pod "downwardapi-volume-55745802-3d04-407e-bb13-509d8174bc30": Phase="Pending", Reason="", readiness=false. Elapsed: 1.854929ms May 4 11:26:44.836: INFO: Pod "downwardapi-volume-55745802-3d04-407e-bb13-509d8174bc30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023213388s May 4 11:26:46.863: INFO: Pod "downwardapi-volume-55745802-3d04-407e-bb13-509d8174bc30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050328611s STEP: Saw pod success May 4 11:26:46.863: INFO: Pod "downwardapi-volume-55745802-3d04-407e-bb13-509d8174bc30" satisfied condition "Succeeded or Failed" May 4 11:26:46.866: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-55745802-3d04-407e-bb13-509d8174bc30 container client-container: STEP: delete the pod May 4 11:26:47.025: INFO: Waiting for pod downwardapi-volume-55745802-3d04-407e-bb13-509d8174bc30 to disappear May 4 11:26:47.036: INFO: Pod downwardapi-volume-55745802-3d04-407e-bb13-509d8174bc30 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:26:47.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7435" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1084,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:26:47.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-1265 STEP: creating a selector STEP: Creating the service pods in kubernetes May 4 11:26:47.133: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable May 4 11:26:47.187: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 11:26:49.192: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 11:26:51.192: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) May 4 11:26:53.192: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:26:55.193: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:26:57.192: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:26:59.192: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:27:01.192: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:27:03.192: INFO: The status of Pod netserver-0 is Running (Ready = false) May 4 11:27:05.192: INFO: The status of Pod netserver-0 is Running (Ready = true) May 4 11:27:05.199: INFO: The status of Pod netserver-1 is Running (Ready = false) May 4 11:27:07.203: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods May 4 11:27:11.275: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.75:8080/dial?request=hostname&protocol=udp&host=10.244.2.129&port=8081&tries=1'] Namespace:pod-network-test-1265 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 11:27:11.275: INFO: >>> kubeConfig: /root/.kube/config I0504 11:27:11.315568 7 log.go:172] (0xc0024782c0) (0xc001e16d20) Create stream I0504 11:27:11.315604 7 log.go:172] (0xc0024782c0) (0xc001e16d20) Stream added, broadcasting: 1 I0504 11:27:11.318169 7 log.go:172] (0xc0024782c0) Reply frame received for 1 I0504 11:27:11.318215 7 log.go:172] (0xc0024782c0) (0xc0025645a0) Create stream I0504 11:27:11.318227 7 log.go:172] (0xc0024782c0) (0xc0025645a0) Stream added, broadcasting: 3 I0504 11:27:11.319332 7 log.go:172] (0xc0024782c0) Reply frame received for 3 I0504 11:27:11.319388 7 log.go:172] (0xc0024782c0) (0xc002564640) Create stream I0504 11:27:11.319406 7 log.go:172] (0xc0024782c0) (0xc002564640) Stream added, broadcasting: 5 I0504 11:27:11.320217 7 log.go:172] (0xc0024782c0) Reply frame 
received for 5 I0504 11:27:11.408162 7 log.go:172] (0xc0024782c0) Data frame received for 3 I0504 11:27:11.408207 7 log.go:172] (0xc0025645a0) (3) Data frame handling I0504 11:27:11.408241 7 log.go:172] (0xc0025645a0) (3) Data frame sent I0504 11:27:11.408577 7 log.go:172] (0xc0024782c0) Data frame received for 3 I0504 11:27:11.408602 7 log.go:172] (0xc0025645a0) (3) Data frame handling I0504 11:27:11.408631 7 log.go:172] (0xc0024782c0) Data frame received for 5 I0504 11:27:11.408647 7 log.go:172] (0xc002564640) (5) Data frame handling I0504 11:27:11.410338 7 log.go:172] (0xc0024782c0) Data frame received for 1 I0504 11:27:11.410368 7 log.go:172] (0xc001e16d20) (1) Data frame handling I0504 11:27:11.410390 7 log.go:172] (0xc001e16d20) (1) Data frame sent I0504 11:27:11.410406 7 log.go:172] (0xc0024782c0) (0xc001e16d20) Stream removed, broadcasting: 1 I0504 11:27:11.410426 7 log.go:172] (0xc0024782c0) Go away received I0504 11:27:11.410519 7 log.go:172] (0xc0024782c0) (0xc001e16d20) Stream removed, broadcasting: 1 I0504 11:27:11.410534 7 log.go:172] (0xc0024782c0) (0xc0025645a0) Stream removed, broadcasting: 3 I0504 11:27:11.410539 7 log.go:172] (0xc0024782c0) (0xc002564640) Stream removed, broadcasting: 5 May 4 11:27:11.410: INFO: Waiting for responses: map[] May 4 11:27:11.413: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.75:8080/dial?request=hostname&protocol=udp&host=10.244.1.74&port=8081&tries=1'] Namespace:pod-network-test-1265 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 11:27:11.413: INFO: >>> kubeConfig: /root/.kube/config I0504 11:27:11.445409 7 log.go:172] (0xc001e60580) (0xc00258e3c0) Create stream I0504 11:27:11.445435 7 log.go:172] (0xc001e60580) (0xc00258e3c0) Stream added, broadcasting: 1 I0504 11:27:11.447991 7 log.go:172] (0xc001e60580) Reply frame received for 1 I0504 11:27:11.448035 7 log.go:172] (0xc001e60580) (0xc001e16f00) Create stream I0504 11:27:11.448045 7 log.go:172] (0xc001e60580) (0xc001e16f00) Stream added, broadcasting: 3 I0504 11:27:11.449339 7 log.go:172] (0xc001e60580) Reply frame received for 3 I0504 11:27:11.449366 7 log.go:172] (0xc001e60580) (0xc001f4c000) Create stream I0504 11:27:11.449382 7 log.go:172] (0xc001e60580) (0xc001f4c000) Stream added, broadcasting: 5 I0504 11:27:11.450460 7 log.go:172] (0xc001e60580) Reply frame received for 5 I0504 11:27:11.522391 7 log.go:172] (0xc001e60580) Data frame received for 3 I0504 11:27:11.522425 7 log.go:172] (0xc001e16f00) (3) Data frame handling I0504 11:27:11.522443 7 log.go:172] (0xc001e16f00) (3) Data frame sent I0504 11:27:11.522595 7 log.go:172] (0xc001e60580) Data frame received for 3 I0504 11:27:11.522613 7 log.go:172] (0xc001e16f00) (3) Data frame handling I0504 11:27:11.522955 7 log.go:172] (0xc001e60580) Data frame received for 5 I0504 11:27:11.522986 7 log.go:172] (0xc001f4c000) (5) Data frame handling I0504 11:27:11.524634 7 log.go:172] (0xc001e60580) Data frame received for 1 I0504 11:27:11.524650 7 log.go:172] (0xc00258e3c0) (1) Data frame handling I0504 11:27:11.524668 7 log.go:172] (0xc00258e3c0) (1) Data frame sent I0504 11:27:11.524700 7 log.go:172] (0xc001e60580) (0xc00258e3c0) Stream removed, broadcasting: 1 I0504 11:27:11.524752 7 log.go:172] (0xc001e60580) Go away received I0504 11:27:11.524806 7 log.go:172] (0xc001e60580) (0xc00258e3c0) Stream removed, broadcasting: 1 I0504 11:27:11.524867 7 log.go:172] (0xc001e60580) (0xc001e16f00) Stream removed, broadcasting: 3 I0504 
11:27:11.524887 7 log.go:172] (0xc001e60580) (0xc001f4c000) Stream removed, broadcasting: 5 May 4 11:27:11.524: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:27:11.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1265" for this suite. • [SLOW TEST:24.487 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:27:11.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 4 11:27:11.651: INFO: Waiting up to 5m0s for pod "downward-api-6a0dd8a1-5813-46b0-84c1-c4c9c4682435" in namespace "downward-api-5149" to be "Succeeded or Failed" May 4 11:27:11.671: INFO: Pod "downward-api-6a0dd8a1-5813-46b0-84c1-c4c9c4682435": Phase="Pending", Reason="", readiness=false. Elapsed: 20.418205ms May 4 11:27:13.675: INFO: Pod "downward-api-6a0dd8a1-5813-46b0-84c1-c4c9c4682435": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024764694s May 4 11:27:15.679: INFO: Pod "downward-api-6a0dd8a1-5813-46b0-84c1-c4c9c4682435": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028683367s STEP: Saw pod success May 4 11:27:15.679: INFO: Pod "downward-api-6a0dd8a1-5813-46b0-84c1-c4c9c4682435" satisfied condition "Succeeded or Failed" May 4 11:27:15.682: INFO: Trying to get logs from node kali-worker pod downward-api-6a0dd8a1-5813-46b0-84c1-c4c9c4682435 container dapi-container: STEP: delete the pod May 4 11:27:15.968: INFO: Waiting for pod downward-api-6a0dd8a1-5813-46b0-84c1-c4c9c4682435 to disappear May 4 11:27:15.975: INFO: Pod downward-api-6a0dd8a1-5813-46b0-84c1-c4c9c4682435 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:27:15.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5149" for this suite. 
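The downward API test above exposes the node's IP to the container through an environment variable resolved from the pod status. A sketch of the relevant env entry on a pod spec; the pod name and image are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "echo $HOST_IP"},
				Env: []corev1.EnvVar{{
					// status.hostIP resolves to the IP of the node the
					// pod is scheduled onto.
					Name: "HOST_IP",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							APIVersion: "v1",
							FieldPath:  "status.hostIP",
						},
					},
				}},
			}},
		},
	}
	fmt.Println("built pod spec:", pod.Name)
}
```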
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":67,"skipped":1210,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:27:15.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:27:36.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5779" for this suite. STEP: Destroying namespace "nsdeletetest-2467" for this suite. May 4 11:27:36.419: INFO: Namespace nsdeletetest-2467 was already deleted STEP: Destroying namespace "nsdeletetest-6126" for this suite. 
• [SLOW TEST:20.439 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":68,"skipped":1246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:27:36.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: getting the auto-created API token STEP: reading a file in the container May 4 11:27:41.071: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7796 pod-service-account-05d3f5a2-853b-447a-b6f9-24ffa38764a8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 4 11:27:41.323: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7796 pod-service-account-05d3f5a2-853b-447a-b6f9-24ffa38764a8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 4 11:27:41.531: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-7796 pod-service-account-05d3f5a2-853b-447a-b6f9-24ffa38764a8 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:27:41.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7796" for this suite. 
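The ServiceAccounts test above execs into the pod and reads the projected token, CA bundle, and namespace files. A small in-pod sketch that reads the same files (these are also what client-go's rest.InClusterConfig() consumes):

```go
package main

import (
	"fmt"
	"io/ioutil"
	"path/filepath"
)

// Default mount point projected by the service account admission controller
// when automountServiceAccountToken is enabled (the default).
const saDir = "/var/run/secrets/kubernetes.io/serviceaccount"

func main() {
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		data, err := ioutil.ReadFile(filepath.Join(saDir, name))
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d bytes\n", name, len(data))
	}
}
```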
• [SLOW TEST:5.313 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":69,"skipped":1297,"failed":0} S ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:27:41.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:27:41.962: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 4 11:27:44.110: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:27:45.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5450" for this suite. 
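The ReplicationController test above creates a quota that allows only two pods plus an RC that asks for more, then checks that a failure condition is surfaced on the RC status. A sketch of how that condition can be read with client-go, assuming v0.18+ and placeholder namespace/RC names:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	rc, err := cs.CoreV1().ReplicationControllers("rc-demo").Get(
		context.TODO(), "condition-test", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// When pod creation is rejected (for example by a ResourceQuota), the
	// controller manager records a ReplicaFailure condition on the RC.
	for _, c := range rc.Status.Conditions {
		if c.Type == corev1.ReplicationControllerReplicaFailure {
			fmt.Printf("ReplicaFailure=%s reason=%s message=%q\n", c.Status, c.Reason, c.Message)
		}
	}
}
```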
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":70,"skipped":1298,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:27:45.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod May 4 11:27:52.166: INFO: Successfully updated pod "labelsupdate37595dba-f057-4a41-87e8-e07a5a14c791" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:27:54.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6723" for this suite. • [SLOW TEST:8.988 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1302,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:27:54.214: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium May 4 11:27:54.394: INFO: Waiting up to 5m0s for pod "pod-e68c5a3d-649c-4d7b-ab8b-3dc7488234a4" in namespace "emptydir-7901" to be "Succeeded or Failed" May 4 11:27:54.420: INFO: Pod "pod-e68c5a3d-649c-4d7b-ab8b-3dc7488234a4": Phase="Pending", Reason="", readiness=false. Elapsed: 25.58696ms May 4 11:27:56.424: INFO: Pod "pod-e68c5a3d-649c-4d7b-ab8b-3dc7488234a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030208479s May 4 11:27:58.428: INFO: Pod "pod-e68c5a3d-649c-4d7b-ab8b-3dc7488234a4": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.034175951s STEP: Saw pod success May 4 11:27:58.428: INFO: Pod "pod-e68c5a3d-649c-4d7b-ab8b-3dc7488234a4" satisfied condition "Succeeded or Failed" May 4 11:27:58.430: INFO: Trying to get logs from node kali-worker pod pod-e68c5a3d-649c-4d7b-ab8b-3dc7488234a4 container test-container: STEP: delete the pod May 4 11:27:58.446: INFO: Waiting for pod pod-e68c5a3d-649c-4d7b-ab8b-3dc7488234a4 to disappear May 4 11:27:58.463: INFO: Pod pod-e68c5a3d-649c-4d7b-ab8b-3dc7488234a4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:27:58.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7901" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":72,"skipped":1305,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:27:58.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-7ac81206-c3ed-459a-9251-dd608569b9d5 STEP: Creating a pod to test consume secrets May 4 11:27:58.597: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ab5575ea-b6b2-4625-a0b3-c1102420b1f8" in namespace "projected-5033" to be "Succeeded or Failed" May 4 11:27:58.612: INFO: Pod "pod-projected-secrets-ab5575ea-b6b2-4625-a0b3-c1102420b1f8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.313865ms May 4 11:28:00.616: INFO: Pod "pod-projected-secrets-ab5575ea-b6b2-4625-a0b3-c1102420b1f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019174279s May 4 11:28:02.631: INFO: Pod "pod-projected-secrets-ab5575ea-b6b2-4625-a0b3-c1102420b1f8": Phase="Running", Reason="", readiness=true. Elapsed: 4.034391623s May 4 11:28:04.634: INFO: Pod "pod-projected-secrets-ab5575ea-b6b2-4625-a0b3-c1102420b1f8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.037222449s STEP: Saw pod success May 4 11:28:04.634: INFO: Pod "pod-projected-secrets-ab5575ea-b6b2-4625-a0b3-c1102420b1f8" satisfied condition "Succeeded or Failed" May 4 11:28:04.636: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-ab5575ea-b6b2-4625-a0b3-c1102420b1f8 container projected-secret-volume-test: STEP: delete the pod May 4 11:28:04.664: INFO: Waiting for pod pod-projected-secrets-ab5575ea-b6b2-4625-a0b3-c1102420b1f8 to disappear May 4 11:28:04.702: INFO: Pod pod-projected-secrets-ab5575ea-b6b2-4625-a0b3-c1102420b1f8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:28:04.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5033" for this suite. • [SLOW TEST:6.237 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1386,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:28:04.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 4 11:28:08.864: INFO: &Pod{ObjectMeta:{send-events-4a2f045c-5b06-441c-a889-35151e581f0b events-5235 /api/v1/namespaces/events-5235/pods/send-events-4a2f045c-5b06-441c-a889-35151e581f0b 19c1f7ad-ae63-444b-bf3d-c50e0718bf66 1422416 0 2020-05-04 11:28:04 +0000 UTC map[name:foo time:787924673] map[] [] [] [{e2e.test Update v1 2020-05-04 11:28:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 
111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 11:28:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 55 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-67wg4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-67wg4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-67wg4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:28:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:28:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:28:08 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:28:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.78,StartTime:2020-05-04 11:28:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 11:28:07 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://32b9630f79cac7df4aefc73b12ce084c568e878d4126518e60d7b2b63bbd664a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.78,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 4 11:28:10.869: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 4 11:28:12.875: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:28:12.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-5235" for this suite. • [SLOW TEST:8.201 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":275,"completed":74,"skipped":1393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:28:12.911: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get 
the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:28:48.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-79" for this suite. • [SLOW TEST:35.829 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1419,"failed":0} SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:28:48.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-stnm STEP: Creating a pod to test atomic-volume-subpath May 4 11:28:48.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-stnm" in namespace "subpath-8582" to be "Succeeded or Failed" May 4 11:28:48.813: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.602137ms May 4 11:28:50.839: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030736553s May 4 11:28:52.843: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. Elapsed: 4.034602611s May 4 11:28:54.847: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. Elapsed: 6.039092372s May 4 11:28:56.852: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. Elapsed: 8.04333993s May 4 11:28:58.856: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. Elapsed: 10.04804215s May 4 11:29:00.860: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. Elapsed: 12.051759843s May 4 11:29:02.865: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. Elapsed: 14.056213779s May 4 11:29:04.869: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.060662291s May 4 11:29:06.874: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. Elapsed: 18.065363314s May 4 11:29:08.878: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. Elapsed: 20.069134778s May 4 11:29:10.882: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Running", Reason="", readiness=true. Elapsed: 22.073352949s May 4 11:29:12.887: INFO: Pod "pod-subpath-test-configmap-stnm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.078218773s STEP: Saw pod success May 4 11:29:12.887: INFO: Pod "pod-subpath-test-configmap-stnm" satisfied condition "Succeeded or Failed" May 4 11:29:12.890: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-stnm container test-container-subpath-configmap-stnm: STEP: delete the pod May 4 11:29:13.049: INFO: Waiting for pod pod-subpath-test-configmap-stnm to disappear May 4 11:29:13.102: INFO: Pod pod-subpath-test-configmap-stnm no longer exists STEP: Deleting pod pod-subpath-test-configmap-stnm May 4 11:29:13.102: INFO: Deleting pod "pod-subpath-test-configmap-stnm" in namespace "subpath-8582" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:29:13.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8582" for this suite. • [SLOW TEST:24.374 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":76,"skipped":1424,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:29:13.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 4 11:29:13.348: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:13.382: INFO: Number of nodes with available pods: 0 May 4 11:29:13.382: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:14.387: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:14.391: INFO: Number of nodes with available pods: 0 May 4 11:29:14.391: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:15.386: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:15.390: INFO: Number of nodes with available pods: 0 May 4 11:29:15.390: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:16.453: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:16.491: INFO: Number of nodes with available pods: 0 May 4 11:29:16.491: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:17.386: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:17.390: INFO: Number of nodes with available pods: 2 May 4 11:29:17.390: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 4 11:29:17.500: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:17.503: INFO: Number of nodes with available pods: 1 May 4 11:29:17.503: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:20.882: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:21.439: INFO: Number of nodes with available pods: 1 May 4 11:29:21.439: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:21.557: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:21.710: INFO: Number of nodes with available pods: 1 May 4 11:29:21.710: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:22.621: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:22.625: INFO: Number of nodes with available pods: 1 May 4 11:29:22.625: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:23.508: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:23.512: INFO: Number of nodes with available pods: 1 May 4 11:29:23.512: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:24.509: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:24.512: INFO: Number of nodes with available pods: 1 May 4 11:29:24.512: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:25.509: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:25.514: INFO: Number of nodes with available pods: 1 May 4 11:29:25.514: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:26.509: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:26.513: INFO: Number of nodes with available pods: 1 May 4 11:29:26.513: INFO: Node kali-worker is running more than one daemon pod May 4 11:29:27.508: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 4 11:29:27.513: INFO: Number of nodes with available pods: 2 May 4 11:29:27.513: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5775, will wait for the garbage collector to delete the pods May 4 11:29:27.575: INFO: Deleting DaemonSet.extensions daemon-set took: 6.463494ms May 4 11:29:27.975: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.273605ms May 4 11:29:35.039: INFO: Number of nodes with available pods: 0 May 4 11:29:35.039: INFO: Number of running nodes: 0, number of available pods: 0 May 4 11:29:35.042: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5775/daemonsets","resourceVersion":"1422838"},"items":null} May 4 11:29:35.045: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5775/pods","resourceVersion":"1422838"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:29:36.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5775" for this suite. 
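For reference, a minimal client-go sketch of a DaemonSet like the "daemon-set" exercised above. The image and label key are illustrative assumptions rather than the suite's exact fixture; the point is that the pod template carries no toleration for the node-role.kubernetes.io/master:NoSchedule taint, which is why the control-plane node is skipped in the log while the two worker nodes each run one pod.

```go
// Sketch only, assuming k8s.io/api and k8s.io/apimachinery are vendored.
package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// simpleDaemonSet builds a DaemonSet whose pods land on every schedulable node.
func simpleDaemonSet(namespace string) *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: namespace},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					// No toleration for node-role.kubernetes.io/master:NoSchedule,
					// so the tainted control-plane node is skipped.
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
						Args:  []string{"serve-hostname"},
					}},
				},
			},
		},
	}
}
```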
• [SLOW TEST:23.708 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":77,"skipped":1445,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:29:36.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 4 11:29:36.954: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 4 11:29:37.013: INFO: Waiting for terminating namespaces to be deleted... May 4 11:29:37.016: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 4 11:29:37.062: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:29:37.062: INFO: Container kindnet-cni ready: true, restart count 1 May 4 11:29:37.062: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:29:37.062: INFO: Container kube-proxy ready: true, restart count 0 May 4 11:29:37.062: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 4 11:29:37.068: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:29:37.068: INFO: Container kindnet-cni ready: true, restart count 0 May 4 11:29:37.068: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:29:37.068: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160bcfcdf6f15729], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:29:38.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9644" for this suite. 
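A hedged sketch of the scenario above: a pod whose nodeSelector matches no node, plus a query for the resulting FailedScheduling event. The pod name, image, and selector label are placeholders, not the suite's own values.

```go
// Sketch only; assumes a client-go clientset is already available.
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// restrictedPod uses a nodeSelector no node satisfies, so it stays Pending with
// "0/3 nodes are available: 3 node(s) didn't match node selector."
func restrictedPod(namespace string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod", Namespace: namespace},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonexistent-value"},
			Containers: []corev1.Container{{
				Name:  "restricted-pod",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
}

// printSchedulingEvents lists the pod's events and prints the scheduler warnings.
func printSchedulingEvents(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	events, err := cs.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{
		FieldSelector: "involvedObject.name=restricted-pod",
	})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		if e.Reason == "FailedScheduling" {
			fmt.Printf("%s %s: %s\n", e.Type, e.Reason, e.Message)
		}
	}
	return nil
}
```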
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":78,"skipped":1456,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:29:38.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:29:38.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3875" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":79,"skipped":1496,"failed":0} ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:29:38.202: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service endpoint-test2 in namespace services-2297 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2297 to expose endpoints map[] May 4 11:29:38.313: INFO: Get endpoints failed (12.87205ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 4 11:29:39.316: INFO: successfully validated that service endpoint-test2 in namespace services-2297 exposes endpoints map[] (1.016083825s elapsed) STEP: Creating pod pod1 in namespace services-2297 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2297 to expose endpoints map[pod1:[80]] May 4 11:29:44.446: INFO: successfully validated that service endpoint-test2 in namespace services-2297 exposes endpoints map[pod1:[80]] (5.12276507s elapsed) STEP: Creating pod pod2 in namespace services-2297 STEP: waiting up to 3m0s for service 
endpoint-test2 in namespace services-2297 to expose endpoints map[pod1:[80] pod2:[80]] May 4 11:29:49.949: INFO: successfully validated that service endpoint-test2 in namespace services-2297 exposes endpoints map[pod1:[80] pod2:[80]] (5.480755419s elapsed) STEP: Deleting pod pod1 in namespace services-2297 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2297 to expose endpoints map[pod2:[80]] May 4 11:29:51.069: INFO: successfully validated that service endpoint-test2 in namespace services-2297 exposes endpoints map[pod2:[80]] (1.114929827s elapsed) STEP: Deleting pod pod2 in namespace services-2297 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-2297 to expose endpoints map[] May 4 11:29:52.087: INFO: successfully validated that service endpoint-test2 in namespace services-2297 exposes endpoints map[] (1.013335736s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:29:52.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2297" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:13.921 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":275,"completed":80,"skipped":1496,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:29:52.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:29:52.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5723" for this suite. 
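The endpoint checks above (map[] -> map[pod1:[80]] -> map[pod1:[80] pod2:[80]] and back) boil down to polling the service's Endpoints object until it exposes the expected addresses. A minimal sketch of such a poll, assuming a client-go clientset; the helper name and the one-second interval are illustrative, not the framework's own helper.

```go
// Sketch only; tolerates "not found" while the endpoints controller catches up.
package sketch

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForEndpointCount polls until the named Service exposes `want` addresses.
func waitForEndpointCount(ctx context.Context, cs kubernetes.Interface, namespace, service string, want int) error {
	return wait.PollImmediate(time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(namespace).Get(ctx, service, metav1.GetOptions{})
		if err != nil {
			return false, nil // endpoints object may not exist yet
		}
		got := 0
		for _, subset := range ep.Subsets {
			got += len(subset.Addresses)
		}
		return got == want, nil
	})
}
```

For instance, waitForEndpointCount(ctx, cs, "services-2297", "endpoint-test2", 2) would correspond to the map[pod1:[80] pod2:[80]] step in the log above.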
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":81,"skipped":1511,"failed":0} SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:29:52.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:29:52.288: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2796 I0504 11:29:52.307689 7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2796, replica count: 1 I0504 11:29:53.358137 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 11:29:54.358378 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0504 11:29:55.358622 7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 4 11:29:55.488: INFO: Created: latency-svc-jdshm May 4 11:29:55.504: INFO: Got endpoints: latency-svc-jdshm [45.654664ms] May 4 11:29:55.536: INFO: Created: latency-svc-x4cmn May 4 11:29:55.545: INFO: Got endpoints: latency-svc-x4cmn [40.797889ms] May 4 11:29:55.602: INFO: Created: latency-svc-pglsl May 4 11:29:55.620: INFO: Got endpoints: latency-svc-pglsl [115.74513ms] May 4 11:29:55.649: INFO: Created: latency-svc-f6jcm May 4 11:29:55.666: INFO: Got endpoints: latency-svc-f6jcm [161.195339ms] May 4 11:29:55.685: INFO: Created: latency-svc-7k8nv May 4 11:29:55.698: INFO: Got endpoints: latency-svc-7k8nv [193.547071ms] May 4 11:29:55.758: INFO: Created: latency-svc-ks986 May 4 11:29:55.768: INFO: Got endpoints: latency-svc-ks986 [263.074505ms] May 4 11:29:55.794: INFO: Created: latency-svc-7p7sc May 4 11:29:55.818: INFO: Got endpoints: latency-svc-7p7sc [313.738438ms] May 4 11:29:55.854: INFO: Created: latency-svc-96ng7 May 4 11:29:55.902: INFO: Got endpoints: latency-svc-96ng7 [397.864463ms] May 4 11:29:55.914: INFO: Created: latency-svc-v4cjt May 4 11:29:55.932: INFO: Got endpoints: latency-svc-v4cjt [427.79201ms] May 4 11:29:55.986: INFO: Created: latency-svc-bhwxx May 4 11:29:55.999: INFO: Got endpoints: latency-svc-bhwxx [494.767852ms] May 4 11:29:56.051: INFO: Created: latency-svc-k5sx5 May 4 11:29:56.056: INFO: Got endpoints: latency-svc-k5sx5 [551.108253ms] May 4 11:29:56.081: INFO: Created: latency-svc-vcjqj May 4 11:29:56.112: INFO: Got endpoints: latency-svc-vcjqj [607.292583ms] May 4 11:29:56.150: INFO: Created: latency-svc-4bkzz May 4 11:29:56.189: INFO: Got endpoints: latency-svc-4bkzz [684.557198ms] May 4 11:29:56.207: INFO: Created: latency-svc-tv5z6 May 4 11:29:56.226: INFO: Got 
endpoints: latency-svc-tv5z6 [721.761208ms] May 4 11:29:56.249: INFO: Created: latency-svc-qkwsn May 4 11:29:56.262: INFO: Got endpoints: latency-svc-qkwsn [758.211417ms] May 4 11:29:56.285: INFO: Created: latency-svc-jmb9v May 4 11:29:56.369: INFO: Got endpoints: latency-svc-jmb9v [864.917609ms] May 4 11:29:56.373: INFO: Created: latency-svc-8zm9g May 4 11:29:56.394: INFO: Got endpoints: latency-svc-8zm9g [848.42424ms] May 4 11:29:56.423: INFO: Created: latency-svc-2wxnb May 4 11:29:56.438: INFO: Got endpoints: latency-svc-2wxnb [818.00563ms] May 4 11:29:56.522: INFO: Created: latency-svc-qt7g5 May 4 11:29:56.523: INFO: Got endpoints: latency-svc-qt7g5 [856.933114ms] May 4 11:29:56.615: INFO: Created: latency-svc-67vks May 4 11:29:56.681: INFO: Got endpoints: latency-svc-67vks [982.535742ms] May 4 11:29:56.683: INFO: Created: latency-svc-tqk7q May 4 11:29:56.697: INFO: Got endpoints: latency-svc-tqk7q [929.535867ms] May 4 11:29:56.735: INFO: Created: latency-svc-74d45 May 4 11:29:56.746: INFO: Got endpoints: latency-svc-74d45 [927.500522ms] May 4 11:29:56.818: INFO: Created: latency-svc-sb4wj May 4 11:29:56.861: INFO: Got endpoints: latency-svc-sb4wj [959.175246ms] May 4 11:29:56.899: INFO: Created: latency-svc-prf2r May 4 11:29:56.913: INFO: Got endpoints: latency-svc-prf2r [980.69238ms] May 4 11:29:56.985: INFO: Created: latency-svc-thhcb May 4 11:29:56.994: INFO: Got endpoints: latency-svc-thhcb [994.767764ms] May 4 11:29:57.047: INFO: Created: latency-svc-86jk5 May 4 11:29:57.071: INFO: Got endpoints: latency-svc-86jk5 [1.015380752s] May 4 11:29:57.149: INFO: Created: latency-svc-vlgrw May 4 11:29:57.191: INFO: Got endpoints: latency-svc-vlgrw [1.079312735s] May 4 11:29:57.227: INFO: Created: latency-svc-27gtl May 4 11:29:57.234: INFO: Got endpoints: latency-svc-27gtl [1.045498897s] May 4 11:29:57.286: INFO: Created: latency-svc-sc45k May 4 11:29:57.329: INFO: Got endpoints: latency-svc-sc45k [1.102656179s] May 4 11:29:57.330: INFO: Created: latency-svc-lq4vk May 4 11:29:57.359: INFO: Got endpoints: latency-svc-lq4vk [1.096564458s] May 4 11:29:57.440: INFO: Created: latency-svc-ft2vj May 4 11:29:57.461: INFO: Got endpoints: latency-svc-ft2vj [1.092339819s] May 4 11:29:57.496: INFO: Created: latency-svc-w5kb4 May 4 11:29:57.506: INFO: Got endpoints: latency-svc-w5kb4 [1.112372581s] May 4 11:29:57.539: INFO: Created: latency-svc-l6s56 May 4 11:29:57.584: INFO: Got endpoints: latency-svc-l6s56 [1.145998335s] May 4 11:29:57.611: INFO: Created: latency-svc-qrkrm May 4 11:29:57.627: INFO: Got endpoints: latency-svc-qrkrm [1.104170302s] May 4 11:29:57.677: INFO: Created: latency-svc-chjl2 May 4 11:29:57.722: INFO: Got endpoints: latency-svc-chjl2 [1.041227641s] May 4 11:29:57.767: INFO: Created: latency-svc-jr85m May 4 11:29:57.778: INFO: Got endpoints: latency-svc-jr85m [1.081257033s] May 4 11:29:57.808: INFO: Created: latency-svc-dcmv6 May 4 11:29:57.854: INFO: Got endpoints: latency-svc-dcmv6 [1.10819169s] May 4 11:29:57.880: INFO: Created: latency-svc-tb69m May 4 11:29:57.910: INFO: Got endpoints: latency-svc-tb69m [1.049182332s] May 4 11:29:57.947: INFO: Created: latency-svc-sgg2k May 4 11:29:57.998: INFO: Got endpoints: latency-svc-sgg2k [1.085103123s] May 4 11:29:58.049: INFO: Created: latency-svc-m7dz8 May 4 11:29:58.061: INFO: Got endpoints: latency-svc-m7dz8 [1.066979089s] May 4 11:29:58.175: INFO: Created: latency-svc-55c25 May 4 11:29:58.182: INFO: Got endpoints: latency-svc-55c25 [1.110347629s] May 4 11:29:58.210: INFO: Created: latency-svc-zq8mx May 4 11:29:58.228: INFO: Got 
endpoints: latency-svc-zq8mx [1.037095539s] May 4 11:29:58.291: INFO: Created: latency-svc-4ms8m May 4 11:29:58.296: INFO: Got endpoints: latency-svc-4ms8m [1.061989917s] May 4 11:29:58.318: INFO: Created: latency-svc-h2hgd May 4 11:29:58.332: INFO: Got endpoints: latency-svc-h2hgd [1.003447646s] May 4 11:29:58.391: INFO: Created: latency-svc-kbkd9 May 4 11:29:58.435: INFO: Got endpoints: latency-svc-kbkd9 [1.075786888s] May 4 11:29:58.463: INFO: Created: latency-svc-d46c7 May 4 11:29:58.493: INFO: Got endpoints: latency-svc-d46c7 [1.031035634s] May 4 11:29:58.535: INFO: Created: latency-svc-vp6xz May 4 11:29:58.608: INFO: Got endpoints: latency-svc-vp6xz [1.102311148s] May 4 11:29:58.660: INFO: Created: latency-svc-cn2jm May 4 11:29:58.669: INFO: Got endpoints: latency-svc-cn2jm [1.085170234s] May 4 11:29:58.752: INFO: Created: latency-svc-s4fbb May 4 11:29:58.756: INFO: Got endpoints: latency-svc-s4fbb [1.129165353s] May 4 11:29:58.786: INFO: Created: latency-svc-r4cfc May 4 11:29:58.803: INFO: Got endpoints: latency-svc-r4cfc [1.080469768s] May 4 11:29:58.828: INFO: Created: latency-svc-8vxbf May 4 11:29:58.851: INFO: Got endpoints: latency-svc-8vxbf [1.072190229s] May 4 11:29:58.984: INFO: Created: latency-svc-mccb5 May 4 11:29:59.001: INFO: Got endpoints: latency-svc-mccb5 [1.14696111s] May 4 11:29:59.141: INFO: Created: latency-svc-j2q4f May 4 11:29:59.183: INFO: Got endpoints: latency-svc-j2q4f [1.272200261s] May 4 11:29:59.183: INFO: Created: latency-svc-krjwf May 4 11:29:59.212: INFO: Got endpoints: latency-svc-krjwf [1.21350202s] May 4 11:29:59.303: INFO: Created: latency-svc-b67n6 May 4 11:29:59.307: INFO: Got endpoints: latency-svc-b67n6 [1.246210923s] May 4 11:29:59.387: INFO: Created: latency-svc-8zzd4 May 4 11:29:59.400: INFO: Got endpoints: latency-svc-8zzd4 [1.218505837s] May 4 11:29:59.470: INFO: Created: latency-svc-xxf6r May 4 11:29:59.482: INFO: Got endpoints: latency-svc-xxf6r [1.254132364s] May 4 11:29:59.518: INFO: Created: latency-svc-hd44f May 4 11:29:59.537: INFO: Got endpoints: latency-svc-hd44f [1.240528152s] May 4 11:29:59.608: INFO: Created: latency-svc-rnqj8 May 4 11:29:59.629: INFO: Got endpoints: latency-svc-rnqj8 [1.296875462s] May 4 11:29:59.680: INFO: Created: latency-svc-4hg9l May 4 11:29:59.722: INFO: Got endpoints: latency-svc-4hg9l [1.287206299s] May 4 11:29:59.782: INFO: Created: latency-svc-b9g77 May 4 11:29:59.796: INFO: Got endpoints: latency-svc-b9g77 [1.303151204s] May 4 11:29:59.884: INFO: Created: latency-svc-4g5nl May 4 11:29:59.898: INFO: Got endpoints: latency-svc-4g5nl [1.289706724s] May 4 11:29:59.950: INFO: Created: latency-svc-fz4t8 May 4 11:29:59.959: INFO: Got endpoints: latency-svc-fz4t8 [1.289490881s] May 4 11:30:00.046: INFO: Created: latency-svc-xk2ht May 4 11:30:00.073: INFO: Got endpoints: latency-svc-xk2ht [1.317443161s] May 4 11:30:00.141: INFO: Created: latency-svc-5mcv6 May 4 11:30:00.160: INFO: Got endpoints: latency-svc-5mcv6 [1.357183489s] May 4 11:30:00.202: INFO: Created: latency-svc-n9kkv May 4 11:30:00.211: INFO: Got endpoints: latency-svc-n9kkv [1.360413244s] May 4 11:30:00.280: INFO: Created: latency-svc-lmbmz May 4 11:30:00.284: INFO: Got endpoints: latency-svc-lmbmz [1.283013234s] May 4 11:30:00.309: INFO: Created: latency-svc-8hpbb May 4 11:30:00.326: INFO: Got endpoints: latency-svc-8hpbb [1.143677381s] May 4 11:30:00.370: INFO: Created: latency-svc-54z26 May 4 11:30:00.411: INFO: Got endpoints: latency-svc-54z26 [1.199268953s] May 4 11:30:00.423: INFO: Created: latency-svc-nmc4h May 4 11:30:00.441: INFO: Got 
endpoints: latency-svc-nmc4h [1.133650511s] May 4 11:30:00.472: INFO: Created: latency-svc-bfbtk May 4 11:30:00.502: INFO: Got endpoints: latency-svc-bfbtk [1.101566479s] May 4 11:30:00.579: INFO: Created: latency-svc-kj6d2 May 4 11:30:00.592: INFO: Got endpoints: latency-svc-kj6d2 [1.109108792s] May 4 11:30:00.628: INFO: Created: latency-svc-kmdhf May 4 11:30:00.634: INFO: Got endpoints: latency-svc-kmdhf [1.096900045s] May 4 11:30:00.664: INFO: Created: latency-svc-bkhqt May 4 11:30:00.743: INFO: Got endpoints: latency-svc-bkhqt [1.113498194s] May 4 11:30:00.813: INFO: Created: latency-svc-nk5qg May 4 11:30:00.814: INFO: Created: latency-svc-x2q6d May 4 11:30:00.898: INFO: Got endpoints: latency-svc-nk5qg [1.175557289s] May 4 11:30:00.898: INFO: Got endpoints: latency-svc-x2q6d [1.102404782s] May 4 11:30:00.946: INFO: Created: latency-svc-56w8g May 4 11:30:00.971: INFO: Got endpoints: latency-svc-56w8g [1.072865846s] May 4 11:30:01.066: INFO: Created: latency-svc-s9987 May 4 11:30:01.141: INFO: Got endpoints: latency-svc-s9987 [1.182511633s] May 4 11:30:01.215: INFO: Created: latency-svc-phq4q May 4 11:30:01.304: INFO: Got endpoints: latency-svc-phq4q [1.230684767s] May 4 11:30:01.322: INFO: Created: latency-svc-ldxr6 May 4 11:30:01.366: INFO: Got endpoints: latency-svc-ldxr6 [1.20560719s] May 4 11:30:01.420: INFO: Created: latency-svc-pqd2n May 4 11:30:01.436: INFO: Got endpoints: latency-svc-pqd2n [1.224518042s] May 4 11:30:01.456: INFO: Created: latency-svc-krp8h May 4 11:30:01.470: INFO: Got endpoints: latency-svc-krp8h [1.186403884s] May 4 11:30:01.491: INFO: Created: latency-svc-xcnvt May 4 11:30:01.537: INFO: Got endpoints: latency-svc-xcnvt [1.210902231s] May 4 11:30:01.557: INFO: Created: latency-svc-sx42k May 4 11:30:01.568: INFO: Got endpoints: latency-svc-sx42k [1.156778287s] May 4 11:30:01.588: INFO: Created: latency-svc-fcf7p May 4 11:30:01.618: INFO: Got endpoints: latency-svc-fcf7p [1.176983795s] May 4 11:30:01.675: INFO: Created: latency-svc-b7zq8 May 4 11:30:01.678: INFO: Got endpoints: latency-svc-b7zq8 [1.176521144s] May 4 11:30:01.731: INFO: Created: latency-svc-cl4rf May 4 11:30:01.743: INFO: Got endpoints: latency-svc-cl4rf [1.150861041s] May 4 11:30:01.768: INFO: Created: latency-svc-ghxgt May 4 11:30:01.824: INFO: Got endpoints: latency-svc-ghxgt [1.189801275s] May 4 11:30:01.840: INFO: Created: latency-svc-jwd8k May 4 11:30:01.858: INFO: Got endpoints: latency-svc-jwd8k [1.115368779s] May 4 11:30:01.888: INFO: Created: latency-svc-gm4n9 May 4 11:30:01.900: INFO: Got endpoints: latency-svc-gm4n9 [1.002557912s] May 4 11:30:01.968: INFO: Created: latency-svc-sbgms May 4 11:30:01.983: INFO: Got endpoints: latency-svc-sbgms [1.084035766s] May 4 11:30:02.019: INFO: Created: latency-svc-s6tqc May 4 11:30:02.043: INFO: Got endpoints: latency-svc-s6tqc [1.072289279s] May 4 11:30:02.061: INFO: Created: latency-svc-jxkfc May 4 11:30:02.129: INFO: Got endpoints: latency-svc-jxkfc [987.853859ms] May 4 11:30:02.145: INFO: Created: latency-svc-5vz7f May 4 11:30:02.160: INFO: Got endpoints: latency-svc-5vz7f [855.948018ms] May 4 11:30:02.187: INFO: Created: latency-svc-9p5qw May 4 11:30:02.205: INFO: Got endpoints: latency-svc-9p5qw [839.619257ms] May 4 11:30:02.279: INFO: Created: latency-svc-brxp5 May 4 11:30:02.306: INFO: Got endpoints: latency-svc-brxp5 [870.578728ms] May 4 11:30:02.307: INFO: Created: latency-svc-mff7f May 4 11:30:02.337: INFO: Got endpoints: latency-svc-mff7f [866.423486ms] May 4 11:30:02.367: INFO: Created: latency-svc-vz8hn May 4 11:30:02.428: INFO: Got 
endpoints: latency-svc-vz8hn [890.904538ms] May 4 11:30:02.457: INFO: Created: latency-svc-2wnkq May 4 11:30:02.475: INFO: Got endpoints: latency-svc-2wnkq [906.684285ms] May 4 11:30:02.499: INFO: Created: latency-svc-sj54g May 4 11:30:02.517: INFO: Got endpoints: latency-svc-sj54g [899.203561ms] May 4 11:30:02.610: INFO: Created: latency-svc-q48xl May 4 11:30:02.661: INFO: Got endpoints: latency-svc-q48xl [983.045603ms] May 4 11:30:02.661: INFO: Created: latency-svc-8wldp May 4 11:30:02.686: INFO: Got endpoints: latency-svc-8wldp [943.431735ms] May 4 11:30:02.753: INFO: Created: latency-svc-f9hxf May 4 11:30:02.764: INFO: Got endpoints: latency-svc-f9hxf [940.305008ms] May 4 11:30:02.792: INFO: Created: latency-svc-fp4f8 May 4 11:30:02.806: INFO: Got endpoints: latency-svc-fp4f8 [948.031738ms] May 4 11:30:02.840: INFO: Created: latency-svc-pmxdw May 4 11:30:02.914: INFO: Got endpoints: latency-svc-pmxdw [1.013511821s] May 4 11:30:02.943: INFO: Created: latency-svc-cqw7z May 4 11:30:02.973: INFO: Got endpoints: latency-svc-cqw7z [990.511524ms] May 4 11:30:03.064: INFO: Created: latency-svc-dnjqw May 4 11:30:03.067: INFO: Got endpoints: latency-svc-dnjqw [1.023718315s] May 4 11:30:03.111: INFO: Created: latency-svc-9dg85 May 4 11:30:03.126: INFO: Got endpoints: latency-svc-9dg85 [996.538305ms] May 4 11:30:03.152: INFO: Created: latency-svc-nswdg May 4 11:30:03.215: INFO: Got endpoints: latency-svc-nswdg [1.054331724s] May 4 11:30:03.243: INFO: Created: latency-svc-6mst4 May 4 11:30:03.258: INFO: Got endpoints: latency-svc-6mst4 [1.052643445s] May 4 11:30:03.297: INFO: Created: latency-svc-vjr8l May 4 11:30:03.345: INFO: Got endpoints: latency-svc-vjr8l [1.03887727s] May 4 11:30:03.364: INFO: Created: latency-svc-m2r9w May 4 11:30:03.377: INFO: Got endpoints: latency-svc-m2r9w [1.040278903s] May 4 11:30:03.404: INFO: Created: latency-svc-8k2pk May 4 11:30:03.429: INFO: Got endpoints: latency-svc-8k2pk [1.001029074s] May 4 11:30:03.513: INFO: Created: latency-svc-54lgq May 4 11:30:03.540: INFO: Got endpoints: latency-svc-54lgq [1.06568957s] May 4 11:30:03.566: INFO: Created: latency-svc-jpghr May 4 11:30:03.650: INFO: Got endpoints: latency-svc-jpghr [1.132916678s] May 4 11:30:03.694: INFO: Created: latency-svc-twxpl May 4 11:30:03.734: INFO: Got endpoints: latency-svc-twxpl [1.072678603s] May 4 11:30:03.752: INFO: Created: latency-svc-4xqft May 4 11:30:03.769: INFO: Got endpoints: latency-svc-4xqft [1.083247331s] May 4 11:30:03.807: INFO: Created: latency-svc-x55bk May 4 11:30:03.875: INFO: Got endpoints: latency-svc-x55bk [1.110432865s] May 4 11:30:03.902: INFO: Created: latency-svc-7ntr7 May 4 11:30:03.920: INFO: Got endpoints: latency-svc-7ntr7 [1.113433754s] May 4 11:30:03.944: INFO: Created: latency-svc-tgmfm May 4 11:30:03.956: INFO: Got endpoints: latency-svc-tgmfm [1.042064049s] May 4 11:30:04.016: INFO: Created: latency-svc-frm5x May 4 11:30:04.040: INFO: Got endpoints: latency-svc-frm5x [1.066931481s] May 4 11:30:04.076: INFO: Created: latency-svc-bdxbn May 4 11:30:04.095: INFO: Got endpoints: latency-svc-bdxbn [1.027958418s] May 4 11:30:04.153: INFO: Created: latency-svc-qmlph May 4 11:30:04.178: INFO: Got endpoints: latency-svc-qmlph [1.051921198s] May 4 11:30:04.178: INFO: Created: latency-svc-mq8kf May 4 11:30:04.202: INFO: Got endpoints: latency-svc-mq8kf [987.173832ms] May 4 11:30:04.232: INFO: Created: latency-svc-nrfwt May 4 11:30:04.246: INFO: Got endpoints: latency-svc-nrfwt [988.366696ms] May 4 11:30:04.303: INFO: Created: latency-svc-44s26 May 4 11:30:04.306: INFO: Got 
endpoints: latency-svc-44s26 [960.922755ms] May 4 11:30:04.334: INFO: Created: latency-svc-9xf6h May 4 11:30:04.349: INFO: Got endpoints: latency-svc-9xf6h [971.251834ms] May 4 11:30:04.370: INFO: Created: latency-svc-q9b4m May 4 11:30:04.387: INFO: Got endpoints: latency-svc-q9b4m [957.37107ms] May 4 11:30:04.446: INFO: Created: latency-svc-tbvmb May 4 11:30:04.449: INFO: Got endpoints: latency-svc-tbvmb [909.193267ms] May 4 11:30:04.478: INFO: Created: latency-svc-cs8v8 May 4 11:30:04.496: INFO: Got endpoints: latency-svc-cs8v8 [846.335031ms] May 4 11:30:04.520: INFO: Created: latency-svc-rjmvn May 4 11:30:04.620: INFO: Got endpoints: latency-svc-rjmvn [886.33791ms] May 4 11:30:04.628: INFO: Created: latency-svc-dg46k May 4 11:30:04.647: INFO: Got endpoints: latency-svc-dg46k [878.120816ms] May 4 11:30:04.682: INFO: Created: latency-svc-gtb9s May 4 11:30:04.696: INFO: Got endpoints: latency-svc-gtb9s [821.040253ms] May 4 11:30:04.764: INFO: Created: latency-svc-rt79b May 4 11:30:04.780: INFO: Got endpoints: latency-svc-rt79b [860.347344ms] May 4 11:30:04.814: INFO: Created: latency-svc-6f8l7 May 4 11:30:04.844: INFO: Got endpoints: latency-svc-6f8l7 [887.669562ms] May 4 11:30:04.898: INFO: Created: latency-svc-2k8zz May 4 11:30:04.913: INFO: Got endpoints: latency-svc-2k8zz [872.715779ms] May 4 11:30:04.945: INFO: Created: latency-svc-8klfp May 4 11:30:04.973: INFO: Got endpoints: latency-svc-8klfp [878.21166ms] May 4 11:30:05.027: INFO: Created: latency-svc-hrg58 May 4 11:30:05.034: INFO: Got endpoints: latency-svc-hrg58 [855.745373ms] May 4 11:30:05.078: INFO: Created: latency-svc-wr2cp May 4 11:30:05.088: INFO: Got endpoints: latency-svc-wr2cp [886.19952ms] May 4 11:30:05.114: INFO: Created: latency-svc-qrhgf May 4 11:30:05.177: INFO: Got endpoints: latency-svc-qrhgf [930.406344ms] May 4 11:30:05.191: INFO: Created: latency-svc-s2mbx May 4 11:30:05.222: INFO: Got endpoints: latency-svc-s2mbx [915.722114ms] May 4 11:30:05.258: INFO: Created: latency-svc-v6468 May 4 11:30:05.308: INFO: Got endpoints: latency-svc-v6468 [959.507297ms] May 4 11:30:05.366: INFO: Created: latency-svc-mks8z May 4 11:30:05.396: INFO: Got endpoints: latency-svc-mks8z [173.725923ms] May 4 11:30:05.452: INFO: Created: latency-svc-t4ngj May 4 11:30:05.462: INFO: Got endpoints: latency-svc-t4ngj [1.075192242s] May 4 11:30:05.485: INFO: Created: latency-svc-vjcq9 May 4 11:30:05.505: INFO: Got endpoints: latency-svc-vjcq9 [1.055025449s] May 4 11:30:05.528: INFO: Created: latency-svc-dvzqg May 4 11:30:05.541: INFO: Got endpoints: latency-svc-dvzqg [1.044715941s] May 4 11:30:05.590: INFO: Created: latency-svc-nx7kz May 4 11:30:05.624: INFO: Got endpoints: latency-svc-nx7kz [1.003373892s] May 4 11:30:05.626: INFO: Created: latency-svc-749vm May 4 11:30:05.653: INFO: Got endpoints: latency-svc-749vm [1.00582924s] May 4 11:30:05.683: INFO: Created: latency-svc-dgfrb May 4 11:30:05.746: INFO: Got endpoints: latency-svc-dgfrb [1.050034586s] May 4 11:30:05.768: INFO: Created: latency-svc-kp8q8 May 4 11:30:05.777: INFO: Got endpoints: latency-svc-kp8q8 [996.441764ms] May 4 11:30:05.805: INFO: Created: latency-svc-lbtvd May 4 11:30:05.822: INFO: Got endpoints: latency-svc-lbtvd [977.842995ms] May 4 11:30:05.845: INFO: Created: latency-svc-2cdm4 May 4 11:30:05.884: INFO: Got endpoints: latency-svc-2cdm4 [970.67747ms] May 4 11:30:05.899: INFO: Created: latency-svc-q5mml May 4 11:30:05.913: INFO: Got endpoints: latency-svc-q5mml [939.516017ms] May 4 11:30:05.936: INFO: Created: latency-svc-t2vhd May 4 11:30:05.954: INFO: Got 
endpoints: latency-svc-t2vhd [920.923125ms] May 4 11:30:05.978: INFO: Created: latency-svc-st47w May 4 11:30:06.022: INFO: Got endpoints: latency-svc-st47w [933.533749ms] May 4 11:30:06.032: INFO: Created: latency-svc-kggbq May 4 11:30:06.061: INFO: Got endpoints: latency-svc-kggbq [884.658099ms] May 4 11:30:06.097: INFO: Created: latency-svc-qx44c May 4 11:30:06.115: INFO: Got endpoints: latency-svc-qx44c [806.571982ms] May 4 11:30:06.176: INFO: Created: latency-svc-4w5ft May 4 11:30:06.200: INFO: Got endpoints: latency-svc-4w5ft [803.72836ms] May 4 11:30:06.235: INFO: Created: latency-svc-jg4nc May 4 11:30:06.251: INFO: Got endpoints: latency-svc-jg4nc [788.465333ms] May 4 11:30:06.309: INFO: Created: latency-svc-54lsb May 4 11:30:06.312: INFO: Got endpoints: latency-svc-54lsb [807.75952ms] May 4 11:30:06.343: INFO: Created: latency-svc-ddwmc May 4 11:30:06.360: INFO: Got endpoints: latency-svc-ddwmc [818.304591ms] May 4 11:30:06.385: INFO: Created: latency-svc-5lnsr May 4 11:30:06.403: INFO: Got endpoints: latency-svc-5lnsr [779.177775ms] May 4 11:30:06.465: INFO: Created: latency-svc-45qh2 May 4 11:30:06.468: INFO: Got endpoints: latency-svc-45qh2 [814.342507ms] May 4 11:30:06.511: INFO: Created: latency-svc-sxtbb May 4 11:30:06.528: INFO: Got endpoints: latency-svc-sxtbb [782.314876ms] May 4 11:30:06.608: INFO: Created: latency-svc-zgsv5 May 4 11:30:06.643: INFO: Got endpoints: latency-svc-zgsv5 [865.870901ms] May 4 11:30:06.643: INFO: Created: latency-svc-9xvxv May 4 11:30:06.662: INFO: Got endpoints: latency-svc-9xvxv [839.923368ms] May 4 11:30:06.765: INFO: Created: latency-svc-w8h7g May 4 11:30:06.799: INFO: Got endpoints: latency-svc-w8h7g [914.889816ms] May 4 11:30:06.914: INFO: Created: latency-svc-qqr4z May 4 11:30:06.919: INFO: Got endpoints: latency-svc-qqr4z [1.006325247s] May 4 11:30:07.009: INFO: Created: latency-svc-4857n May 4 11:30:07.081: INFO: Got endpoints: latency-svc-4857n [1.12667941s] May 4 11:30:07.098: INFO: Created: latency-svc-7f4f5 May 4 11:30:07.110: INFO: Got endpoints: latency-svc-7f4f5 [1.088315726s] May 4 11:30:07.171: INFO: Created: latency-svc-8wj9x May 4 11:30:07.179: INFO: Got endpoints: latency-svc-8wj9x [1.117291543s] May 4 11:30:07.261: INFO: Created: latency-svc-6slh5 May 4 11:30:07.279: INFO: Got endpoints: latency-svc-6slh5 [1.163895895s] May 4 11:30:07.376: INFO: Created: latency-svc-gqrhs May 4 11:30:07.435: INFO: Got endpoints: latency-svc-gqrhs [1.235332836s] May 4 11:30:07.439: INFO: Created: latency-svc-tdbjq May 4 11:30:07.513: INFO: Got endpoints: latency-svc-tdbjq [1.262398344s] May 4 11:30:07.543: INFO: Created: latency-svc-btp5t May 4 11:30:07.562: INFO: Got endpoints: latency-svc-btp5t [1.25001408s] May 4 11:30:07.656: INFO: Created: latency-svc-w8brj May 4 11:30:07.659: INFO: Got endpoints: latency-svc-w8brj [1.29963357s] May 4 11:30:07.687: INFO: Created: latency-svc-lqmxl May 4 11:30:07.702: INFO: Got endpoints: latency-svc-lqmxl [1.298366772s] May 4 11:30:07.723: INFO: Created: latency-svc-w2x24 May 4 11:30:07.738: INFO: Got endpoints: latency-svc-w2x24 [1.269687649s] May 4 11:30:07.794: INFO: Created: latency-svc-r5n7l May 4 11:30:07.819: INFO: Created: latency-svc-mw8zk May 4 11:30:07.819: INFO: Got endpoints: latency-svc-r5n7l [1.290913328s] May 4 11:30:07.849: INFO: Got endpoints: latency-svc-mw8zk [1.2063249s] May 4 11:30:07.885: INFO: Created: latency-svc-4fwks May 4 11:30:07.937: INFO: Got endpoints: latency-svc-4fwks [1.275745025s] May 4 11:30:07.963: INFO: Created: latency-svc-wdr9h May 4 11:30:07.980: INFO: Got 
endpoints: latency-svc-wdr9h [1.181457532s] May 4 11:30:08.006: INFO: Created: latency-svc-hz4rs May 4 11:30:08.022: INFO: Got endpoints: latency-svc-hz4rs [1.102561736s] May 4 11:30:08.076: INFO: Created: latency-svc-ngf8m May 4 11:30:08.080: INFO: Got endpoints: latency-svc-ngf8m [998.231043ms] May 4 11:30:08.121: INFO: Created: latency-svc-9cfn6 May 4 11:30:08.137: INFO: Got endpoints: latency-svc-9cfn6 [1.026499617s] May 4 11:30:08.160: INFO: Created: latency-svc-rw68l May 4 11:30:08.172: INFO: Got endpoints: latency-svc-rw68l [993.631542ms] May 4 11:30:08.256: INFO: Created: latency-svc-zlb2h May 4 11:30:08.263: INFO: Got endpoints: latency-svc-zlb2h [984.41623ms] May 4 11:30:08.323: INFO: Created: latency-svc-flc26 May 4 11:30:08.375: INFO: Got endpoints: latency-svc-flc26 [939.575077ms] May 4 11:30:08.425: INFO: Created: latency-svc-6n752 May 4 11:30:08.446: INFO: Got endpoints: latency-svc-6n752 [932.816478ms] May 4 11:30:08.519: INFO: Created: latency-svc-npqxg May 4 11:30:08.563: INFO: Got endpoints: latency-svc-npqxg [1.000284673s] May 4 11:30:08.563: INFO: Created: latency-svc-qgw4q May 4 11:30:08.582: INFO: Got endpoints: latency-svc-qgw4q [923.184627ms] May 4 11:30:08.694: INFO: Created: latency-svc-fdp9d May 4 11:30:08.713: INFO: Got endpoints: latency-svc-fdp9d [1.010888768s] May 4 11:30:08.731: INFO: Created: latency-svc-fvfx2 May 4 11:30:08.752: INFO: Got endpoints: latency-svc-fvfx2 [1.01439115s] May 4 11:30:08.791: INFO: Created: latency-svc-mdjp6 May 4 11:30:08.848: INFO: Got endpoints: latency-svc-mdjp6 [1.028625283s] May 4 11:30:08.875: INFO: Created: latency-svc-gfv9h May 4 11:30:08.903: INFO: Got endpoints: latency-svc-gfv9h [1.054197478s] May 4 11:30:08.934: INFO: Created: latency-svc-k66mk May 4 11:30:08.987: INFO: Got endpoints: latency-svc-k66mk [1.049901971s] May 4 11:30:09.036: INFO: Created: latency-svc-zjz5s May 4 11:30:09.054: INFO: Got endpoints: latency-svc-zjz5s [1.074030246s] May 4 11:30:09.078: INFO: Created: latency-svc-zz56d May 4 11:30:09.135: INFO: Got endpoints: latency-svc-zz56d [1.11329481s] May 4 11:30:09.181: INFO: Created: latency-svc-g56pc May 4 11:30:09.211: INFO: Got endpoints: latency-svc-g56pc [1.131431396s] May 4 11:30:09.273: INFO: Created: latency-svc-cd254 May 4 11:30:09.276: INFO: Got endpoints: latency-svc-cd254 [1.139400953s] May 4 11:30:09.306: INFO: Created: latency-svc-wdknm May 4 11:30:09.323: INFO: Got endpoints: latency-svc-wdknm [1.15093867s] May 4 11:30:09.324: INFO: Latencies: [40.797889ms 115.74513ms 161.195339ms 173.725923ms 193.547071ms 263.074505ms 313.738438ms 397.864463ms 427.79201ms 494.767852ms 551.108253ms 607.292583ms 684.557198ms 721.761208ms 758.211417ms 779.177775ms 782.314876ms 788.465333ms 803.72836ms 806.571982ms 807.75952ms 814.342507ms 818.00563ms 818.304591ms 821.040253ms 839.619257ms 839.923368ms 846.335031ms 848.42424ms 855.745373ms 855.948018ms 856.933114ms 860.347344ms 864.917609ms 865.870901ms 866.423486ms 870.578728ms 872.715779ms 878.120816ms 878.21166ms 884.658099ms 886.19952ms 886.33791ms 887.669562ms 890.904538ms 899.203561ms 906.684285ms 909.193267ms 914.889816ms 915.722114ms 920.923125ms 923.184627ms 927.500522ms 929.535867ms 930.406344ms 932.816478ms 933.533749ms 939.516017ms 939.575077ms 940.305008ms 943.431735ms 948.031738ms 957.37107ms 959.175246ms 959.507297ms 960.922755ms 970.67747ms 971.251834ms 977.842995ms 980.69238ms 982.535742ms 983.045603ms 984.41623ms 987.173832ms 987.853859ms 988.366696ms 990.511524ms 993.631542ms 994.767764ms 996.441764ms 996.538305ms 998.231043ms 
1.000284673s 1.001029074s 1.002557912s 1.003373892s 1.003447646s 1.00582924s 1.006325247s 1.010888768s 1.013511821s 1.01439115s 1.015380752s 1.023718315s 1.026499617s 1.027958418s 1.028625283s 1.031035634s 1.037095539s 1.03887727s 1.040278903s 1.041227641s 1.042064049s 1.044715941s 1.045498897s 1.049182332s 1.049901971s 1.050034586s 1.051921198s 1.052643445s 1.054197478s 1.054331724s 1.055025449s 1.061989917s 1.06568957s 1.066931481s 1.066979089s 1.072190229s 1.072289279s 1.072678603s 1.072865846s 1.074030246s 1.075192242s 1.075786888s 1.079312735s 1.080469768s 1.081257033s 1.083247331s 1.084035766s 1.085103123s 1.085170234s 1.088315726s 1.092339819s 1.096564458s 1.096900045s 1.101566479s 1.102311148s 1.102404782s 1.102561736s 1.102656179s 1.104170302s 1.10819169s 1.109108792s 1.110347629s 1.110432865s 1.112372581s 1.11329481s 1.113433754s 1.113498194s 1.115368779s 1.117291543s 1.12667941s 1.129165353s 1.131431396s 1.132916678s 1.133650511s 1.139400953s 1.143677381s 1.145998335s 1.14696111s 1.150861041s 1.15093867s 1.156778287s 1.163895895s 1.175557289s 1.176521144s 1.176983795s 1.181457532s 1.182511633s 1.186403884s 1.189801275s 1.199268953s 1.20560719s 1.2063249s 1.210902231s 1.21350202s 1.218505837s 1.224518042s 1.230684767s 1.235332836s 1.240528152s 1.246210923s 1.25001408s 1.254132364s 1.262398344s 1.269687649s 1.272200261s 1.275745025s 1.283013234s 1.287206299s 1.289490881s 1.289706724s 1.290913328s 1.296875462s 1.298366772s 1.29963357s 1.303151204s 1.317443161s 1.357183489s 1.360413244s] May 4 11:30:09.324: INFO: 50 %ile: 1.040278903s May 4 11:30:09.324: INFO: 90 %ile: 1.240528152s May 4 11:30:09.324: INFO: 99 %ile: 1.357183489s May 4 11:30:09.324: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:30:09.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2796" for this suite. 
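Each "Created"/"Got endpoints" pair above is one creation-to-endpoints latency sample, and the summary lines report the 50th/90th/99th percentiles over 200 such samples. A rough sketch of how a single sample could be measured with client-go; the selector, port, and poll interval are assumptions, not the test's exact parameters.

```go
// Sketch only: measures how long a new Service takes to gain a ready endpoint.
package sketch

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func measureEndpointLatency(ctx context.Context, cs kubernetes.Interface, namespace, name string) (time.Duration, error) {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: name, Namespace: namespace},
		Spec: corev1.ServiceSpec{
			// Selector assumed to match the svc-latency-rc replication controller's pods.
			Selector: map[string]string{"name": "svc-latency-rc"},
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				Protocol:   corev1.ProtocolTCP,
			}},
		},
	}
	start := time.Now()
	if _, err := cs.CoreV1().Services(namespace).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return 0, err
	}
	// Poll until the endpoints controller has published at least one address.
	err := wait.PollImmediate(50*time.Millisecond, time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints(namespace).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, nil
		}
		for _, s := range ep.Subsets {
			if len(s.Addresses) > 0 {
				return true, nil
			}
		}
		return false, nil
	})
	return time.Since(start), err
}
```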
• [SLOW TEST:17.119 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":275,"completed":82,"skipped":1514,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:30:09.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0504 11:30:10.640203 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 4 11:30:10.640: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:30:10.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4763" for this suite. 
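The orphaning behaviour verified above comes down to the delete propagation policy on the Deployment. A minimal client-go sketch, assuming an existing clientset:

```go
// Sketch only: delete a Deployment without cascading to its ReplicaSet.
package sketch

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteDeploymentOrphaningReplicaSets(ctx context.Context, cs kubernetes.Interface, namespace, name string) error {
	orphan := metav1.DeletePropagationOrphan
	return cs.AppsV1().Deployments(namespace).Delete(ctx, name, metav1.DeleteOptions{
		// Orphan tells the garbage collector not to delete owned objects.
		PropagationPolicy: &orphan,
	})
}
```

With Foreground or Background propagation the ReplicaSet and its pods would be cascade-deleted; Orphan leaves them in place with their ownerReferences cleared, which is what the test checks for.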
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":83,"skipped":1524,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:30:10.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC May 4 11:30:10.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3613' May 4 11:30:11.175: INFO: stderr: "" May 4 11:30:11.175: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 4 11:30:12.192: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:30:12.192: INFO: Found 0 / 1 May 4 11:30:13.213: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:30:13.213: INFO: Found 0 / 1 May 4 11:30:14.214: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:30:14.214: INFO: Found 0 / 1 May 4 11:30:15.412: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:30:15.412: INFO: Found 0 / 1 May 4 11:30:16.255: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:30:16.255: INFO: Found 1 / 1 May 4 11:30:16.255: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 4 11:30:16.319: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:30:16.319: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 4 11:30:16.319: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config patch pod agnhost-master-nhqks --namespace=kubectl-3613 -p {"metadata":{"annotations":{"x":"y"}}}' May 4 11:30:16.709: INFO: stderr: "" May 4 11:30:16.709: INFO: stdout: "pod/agnhost-master-nhqks patched\n" STEP: checking annotations May 4 11:30:16.784: INFO: Selector matched 1 pods for map[app:agnhost] May 4 11:30:16.784: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:30:16.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3613" for this suite. 
• [SLOW TEST:6.164 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":84,"skipped":1527,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:30:16.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-43b0c145-b66a-46e2-9932-7c9229dbcd94 in namespace container-probe-6352 May 4 11:30:21.578: INFO: Started pod liveness-43b0c145-b66a-46e2-9932-7c9229dbcd94 in namespace container-probe-6352 STEP: checking the pod's current state and verifying that restartCount is present May 4 11:30:21.580: INFO: Initial restart count of pod liveness-43b0c145-b66a-46e2-9932-7c9229dbcd94 is 0 May 4 11:30:44.361: INFO: Restart count of pod container-probe-6352/liveness-43b0c145-b66a-46e2-9932-7c9229dbcd94 is now 1 (22.780634083s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:30:44.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6352" for this suite. 
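Note on the container-probe test above: the pod is restarted once its /healthz endpoint starts failing, which is why the restart count goes from 0 to 1. A sketch of a pod spec with such a probe, written against the v1.18-era k8s.io/api used here (newer releases rename Handler to ProbeHandler); the image and thresholds are illustrative, not the test's exact values:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "registry.example.com/liveness:latest", // placeholder image
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	fmt.Printf("liveness probe: %+v\n", pod.Spec.Containers[0].LivenessProbe)
}
```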
• [SLOW TEST:27.707 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1549,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:30:44.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 11:30:46.212: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 11:30:48.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188646, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188646, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188646, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188646, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 11:30:51.257: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:30:51.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5721" for this suite. 
STEP: Destroying namespace "webhook-5721-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.365 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":86,"skipped":1553,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:30:51.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs May 4 11:30:51.998: INFO: Waiting up to 5m0s for pod "pod-263876c5-30bc-4637-86db-195b620dffba" in namespace "emptydir-217" to be "Succeeded or Failed" May 4 11:30:52.018: INFO: Pod "pod-263876c5-30bc-4637-86db-195b620dffba": Phase="Pending", Reason="", readiness=false. Elapsed: 19.717615ms May 4 11:30:54.022: INFO: Pod "pod-263876c5-30bc-4637-86db-195b620dffba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023386716s May 4 11:30:56.026: INFO: Pod "pod-263876c5-30bc-4637-86db-195b620dffba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027716108s STEP: Saw pod success May 4 11:30:56.026: INFO: Pod "pod-263876c5-30bc-4637-86db-195b620dffba" satisfied condition "Succeeded or Failed" May 4 11:30:56.029: INFO: Trying to get logs from node kali-worker2 pod pod-263876c5-30bc-4637-86db-195b620dffba container test-container: STEP: delete the pod May 4 11:30:56.062: INFO: Waiting for pod pod-263876c5-30bc-4637-86db-195b620dffba to disappear May 4 11:30:56.094: INFO: Pod pod-263876c5-30bc-4637-86db-195b620dffba no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:30:56.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-217" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1574,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:30:56.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 4 11:30:56.346: INFO: Waiting up to 5m0s for pod "downwardapi-volume-89204a70-2e15-48c5-9ba9-434535922891" in namespace "projected-7122" to be "Succeeded or Failed" May 4 11:30:56.362: INFO: Pod "downwardapi-volume-89204a70-2e15-48c5-9ba9-434535922891": Phase="Pending", Reason="", readiness=false. Elapsed: 15.670518ms May 4 11:30:58.366: INFO: Pod "downwardapi-volume-89204a70-2e15-48c5-9ba9-434535922891": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020137159s May 4 11:31:00.370: INFO: Pod "downwardapi-volume-89204a70-2e15-48c5-9ba9-434535922891": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024294057s STEP: Saw pod success May 4 11:31:00.370: INFO: Pod "downwardapi-volume-89204a70-2e15-48c5-9ba9-434535922891" satisfied condition "Succeeded or Failed" May 4 11:31:00.374: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-89204a70-2e15-48c5-9ba9-434535922891 container client-container: STEP: delete the pod May 4 11:31:00.412: INFO: Waiting for pod downwardapi-volume-89204a70-2e15-48c5-9ba9-434535922891 to disappear May 4 11:31:00.423: INFO: Pod downwardapi-volume-89204a70-2e15-48c5-9ba9-434535922891 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:00.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7122" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1592,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:00.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 4 11:31:00.509: INFO: Waiting up to 5m0s for pod "downward-api-241b01dc-2e4b-484a-8b09-c5d872fdfabe" in namespace "downward-api-5545" to be "Succeeded or Failed" May 4 11:31:00.530: INFO: Pod "downward-api-241b01dc-2e4b-484a-8b09-c5d872fdfabe": Phase="Pending", Reason="", readiness=false. Elapsed: 20.731844ms May 4 11:31:02.534: INFO: Pod "downward-api-241b01dc-2e4b-484a-8b09-c5d872fdfabe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024909125s May 4 11:31:04.539: INFO: Pod "downward-api-241b01dc-2e4b-484a-8b09-c5d872fdfabe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029768654s STEP: Saw pod success May 4 11:31:04.539: INFO: Pod "downward-api-241b01dc-2e4b-484a-8b09-c5d872fdfabe" satisfied condition "Succeeded or Failed" May 4 11:31:04.542: INFO: Trying to get logs from node kali-worker pod downward-api-241b01dc-2e4b-484a-8b09-c5d872fdfabe container dapi-container: STEP: delete the pod May 4 11:31:04.574: INFO: Waiting for pod downward-api-241b01dc-2e4b-484a-8b09-c5d872fdfabe to disappear May 4 11:31:04.578: INFO: Pod downward-api-241b01dc-2e4b-484a-8b09-c5d872fdfabe no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:04.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5545" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1631,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:04.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:31:04.654: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 4 11:31:07.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4575 create -f -' May 4 11:31:10.776: INFO: stderr: "" May 4 11:31:10.776: INFO: stdout: "e2e-test-crd-publish-openapi-3933-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 4 11:31:10.776: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4575 delete e2e-test-crd-publish-openapi-3933-crds test-cr' May 4 11:31:10.909: INFO: stderr: "" May 4 11:31:10.909: INFO: stdout: "e2e-test-crd-publish-openapi-3933-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 4 11:31:10.909: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4575 apply -f -' May 4 11:31:11.159: INFO: stderr: "" May 4 11:31:11.159: INFO: stdout: "e2e-test-crd-publish-openapi-3933-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 4 11:31:11.159: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4575 delete e2e-test-crd-publish-openapi-3933-crds test-cr' May 4 11:31:11.291: INFO: stderr: "" May 4 11:31:11.292: INFO: stdout: "e2e-test-crd-publish-openapi-3933-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 4 11:31:11.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3933-crds' May 4 11:31:11.573: INFO: stderr: "" May 4 11:31:11.573: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3933-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:13.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4575" for this suite. 
• [SLOW TEST:9.011 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":90,"skipped":1654,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:13.597: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:19.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-2470" for this suite. STEP: Destroying namespace "nsdeletetest-5968" for this suite. May 4 11:31:19.951: INFO: Namespace nsdeletetest-5968 was already deleted STEP: Destroying namespace "nsdeletetest-9142" for this suite. 
• [SLOW TEST:6.357 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":91,"skipped":1660,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:19.955: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on node default medium May 4 11:31:20.030: INFO: Waiting up to 5m0s for pod "pod-2defb530-2085-456e-9950-72768ccb0e6f" in namespace "emptydir-5356" to be "Succeeded or Failed" May 4 11:31:20.050: INFO: Pod "pod-2defb530-2085-456e-9950-72768ccb0e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 20.715877ms May 4 11:31:22.054: INFO: Pod "pod-2defb530-2085-456e-9950-72768ccb0e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024129563s May 4 11:31:24.058: INFO: Pod "pod-2defb530-2085-456e-9950-72768ccb0e6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028136401s STEP: Saw pod success May 4 11:31:24.058: INFO: Pod "pod-2defb530-2085-456e-9950-72768ccb0e6f" satisfied condition "Succeeded or Failed" May 4 11:31:24.060: INFO: Trying to get logs from node kali-worker pod pod-2defb530-2085-456e-9950-72768ccb0e6f container test-container: STEP: delete the pod May 4 11:31:24.100: INFO: Waiting for pod pod-2defb530-2085-456e-9950-72768ccb0e6f to disappear May 4 11:31:24.107: INFO: Pod pod-2defb530-2085-456e-9950-72768ccb0e6f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:24.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5356" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":92,"skipped":1676,"failed":0} SSSSSSSSSS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:24.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange May 4 11:31:24.212: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values May 4 11:31:24.220: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 4 11:31:24.220: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange May 4 11:31:24.242: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] May 4 11:31:24.242: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange May 4 11:31:24.272: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] May 4 11:31:24.272: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted May 4 11:31:31.570: INFO: 
limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:31.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-9205" for this suite. • [SLOW TEST:7.507 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":93,"skipped":1686,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:31.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 4 11:31:31.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f51dbf0f-975d-428b-8fe3-7fcf17c2b882" in namespace "downward-api-8937" to be "Succeeded or Failed" May 4 11:31:31.778: INFO: Pod "downwardapi-volume-f51dbf0f-975d-428b-8fe3-7fcf17c2b882": Phase="Pending", Reason="", readiness=false. Elapsed: 19.780566ms May 4 11:31:33.861: INFO: Pod "downwardapi-volume-f51dbf0f-975d-428b-8fe3-7fcf17c2b882": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102878576s May 4 11:31:35.866: INFO: Pod "downwardapi-volume-f51dbf0f-975d-428b-8fe3-7fcf17c2b882": Phase="Running", Reason="", readiness=true. Elapsed: 4.107791057s May 4 11:31:37.890: INFO: Pod "downwardapi-volume-f51dbf0f-975d-428b-8fe3-7fcf17c2b882": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.132297357s STEP: Saw pod success May 4 11:31:37.890: INFO: Pod "downwardapi-volume-f51dbf0f-975d-428b-8fe3-7fcf17c2b882" satisfied condition "Succeeded or Failed" May 4 11:31:37.893: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f51dbf0f-975d-428b-8fe3-7fcf17c2b882 container client-container: STEP: delete the pod May 4 11:31:38.024: INFO: Waiting for pod downwardapi-volume-f51dbf0f-975d-428b-8fe3-7fcf17c2b882 to disappear May 4 11:31:38.035: INFO: Pod downwardapi-volume-f51dbf0f-975d-428b-8fe3-7fcf17c2b882 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:38.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8937" for this suite. • [SLOW TEST:6.422 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1692,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:38.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name projected-configmap-test-volume-6f0fc066-b2b3-477a-a8d4-849838585b36 STEP: Creating a pod to test consume configMaps May 4 11:31:38.517: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab63f8dd-0a43-4225-8725-93b75613a441" in namespace "projected-9818" to be "Succeeded or Failed" May 4 11:31:38.529: INFO: Pod "pod-projected-configmaps-ab63f8dd-0a43-4225-8725-93b75613a441": Phase="Pending", Reason="", readiness=false. Elapsed: 12.334479ms May 4 11:31:40.533: INFO: Pod "pod-projected-configmaps-ab63f8dd-0a43-4225-8725-93b75613a441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016601583s May 4 11:31:42.537: INFO: Pod "pod-projected-configmaps-ab63f8dd-0a43-4225-8725-93b75613a441": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020250065s STEP: Saw pod success May 4 11:31:42.537: INFO: Pod "pod-projected-configmaps-ab63f8dd-0a43-4225-8725-93b75613a441" satisfied condition "Succeeded or Failed" May 4 11:31:42.540: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-ab63f8dd-0a43-4225-8725-93b75613a441 container projected-configmap-volume-test: STEP: delete the pod May 4 11:31:42.638: INFO: Waiting for pod pod-projected-configmaps-ab63f8dd-0a43-4225-8725-93b75613a441 to disappear May 4 11:31:42.652: INFO: Pod pod-projected-configmaps-ab63f8dd-0a43-4225-8725-93b75613a441 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:42.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9818" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1716,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:42.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 4 11:31:42.797: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2df36a29-aa55-4f23-97dc-df8d31229553" in namespace "downward-api-5033" to be "Succeeded or Failed" May 4 11:31:42.808: INFO: Pod "downwardapi-volume-2df36a29-aa55-4f23-97dc-df8d31229553": Phase="Pending", Reason="", readiness=false. Elapsed: 10.453828ms May 4 11:31:44.824: INFO: Pod "downwardapi-volume-2df36a29-aa55-4f23-97dc-df8d31229553": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026628399s May 4 11:31:46.828: INFO: Pod "downwardapi-volume-2df36a29-aa55-4f23-97dc-df8d31229553": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030504278s STEP: Saw pod success May 4 11:31:46.828: INFO: Pod "downwardapi-volume-2df36a29-aa55-4f23-97dc-df8d31229553" satisfied condition "Succeeded or Failed" May 4 11:31:46.830: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-2df36a29-aa55-4f23-97dc-df8d31229553 container client-container: STEP: delete the pod May 4 11:31:46.916: INFO: Waiting for pod downwardapi-volume-2df36a29-aa55-4f23-97dc-df8d31229553 to disappear May 4 11:31:46.928: INFO: Pod downwardapi-volume-2df36a29-aa55-4f23-97dc-df8d31229553 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:46.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5033" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1739,"failed":0} SSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:46.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:31:47.058: INFO: Waiting up to 5m0s for pod "busybox-user-65534-82291ff4-0062-498c-84b7-146855e47dd0" in namespace "security-context-test-3243" to be "Succeeded or Failed" May 4 11:31:47.061: INFO: Pod "busybox-user-65534-82291ff4-0062-498c-84b7-146855e47dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.95065ms May 4 11:31:49.066: INFO: Pod "busybox-user-65534-82291ff4-0062-498c-84b7-146855e47dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008148417s May 4 11:31:51.072: INFO: Pod "busybox-user-65534-82291ff4-0062-498c-84b7-146855e47dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.013469452s May 4 11:31:53.076: INFO: Pod "busybox-user-65534-82291ff4-0062-498c-84b7-146855e47dd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017718909s May 4 11:31:53.076: INFO: Pod "busybox-user-65534-82291ff4-0062-498c-84b7-146855e47dd0" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:31:53.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3243" for this suite. • [SLOW TEST:6.151 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1746,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:31:53.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:32:10.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-413" for this suite. • [SLOW TEST:17.165 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":98,"skipped":1769,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:32:10.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 11:32:10.704: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 11:32:12.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188730, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188730, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188730, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724188730, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 11:32:16.003: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:32:28.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3670" for this suite. STEP: Destroying namespace "webhook-3670-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.044 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":99,"skipped":1773,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:32:28.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 4 11:32:32.876: INFO: Successfully updated pod "adopt-release-28xhq" STEP: Checking that the Job readopts the Pod May 4 11:32:32.876: INFO: Waiting up to 15m0s for pod "adopt-release-28xhq" in namespace "job-5809" to be "adopted" May 4 11:32:32.882: INFO: Pod "adopt-release-28xhq": Phase="Running", Reason="", readiness=true. Elapsed: 5.010502ms May 4 11:32:34.885: INFO: Pod "adopt-release-28xhq": Phase="Running", Reason="", readiness=true. Elapsed: 2.00887079s May 4 11:32:34.885: INFO: Pod "adopt-release-28xhq" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 4 11:32:35.394: INFO: Successfully updated pod "adopt-release-28xhq" STEP: Checking that the Job releases the Pod May 4 11:32:35.394: INFO: Waiting up to 15m0s for pod "adopt-release-28xhq" in namespace "job-5809" to be "released" May 4 11:32:35.466: INFO: Pod "adopt-release-28xhq": Phase="Running", Reason="", readiness=true. Elapsed: 72.582255ms May 4 11:32:37.501: INFO: Pod "adopt-release-28xhq": Phase="Running", Reason="", readiness=true. Elapsed: 2.107482243s May 4 11:32:37.501: INFO: Pod "adopt-release-28xhq" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:32:37.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-5809" for this suite. 
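Note on the Job test above: "adopted" and "released" come down to whether the pod carries a controller ownerReference pointing at the Job; the controller adopts pods whose labels match its selector and releases pods whose labels stop matching. A sketch of checking that from a client with metav1.GetControllerOf (the pod value here is constructed locally purely for illustration):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	controller := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "adopt-release-example",
			Labels: map[string]string{"job": "adopt-release"}, // matching labels => eligible for adoption
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "batch/v1",
				Kind:       "Job",
				Name:       "adopt-release",
				UID:        "00000000-0000-0000-0000-000000000000", // placeholder UID
				Controller: &controller,
			}},
		},
	}

	// GetControllerOf returns the ownerReference with controller=true, if any.
	if ref := metav1.GetControllerOf(&pod); ref != nil {
		fmt.Printf("pod is adopted by %s %q\n", ref.Kind, ref.Name)
	} else {
		fmt.Println("pod is orphaned / released")
	}
}
```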
• [SLOW TEST:9.213 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":100,"skipped":1782,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:32:37.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-4378/secret-test-d16ed2fb-7990-49be-b640-50678457635f STEP: Creating a pod to test consume secrets May 4 11:32:37.772: INFO: Waiting up to 5m0s for pod "pod-configmaps-2b4145c3-05ee-41b5-aff3-afc1782835b9" in namespace "secrets-4378" to be "Succeeded or Failed" May 4 11:32:37.779: INFO: Pod "pod-configmaps-2b4145c3-05ee-41b5-aff3-afc1782835b9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.076767ms May 4 11:32:40.646: INFO: Pod "pod-configmaps-2b4145c3-05ee-41b5-aff3-afc1782835b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.874036453s May 4 11:32:42.700: INFO: Pod "pod-configmaps-2b4145c3-05ee-41b5-aff3-afc1782835b9": Phase="Running", Reason="", readiness=true. Elapsed: 4.927294085s May 4 11:32:44.704: INFO: Pod "pod-configmaps-2b4145c3-05ee-41b5-aff3-afc1782835b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.931832533s STEP: Saw pod success May 4 11:32:44.704: INFO: Pod "pod-configmaps-2b4145c3-05ee-41b5-aff3-afc1782835b9" satisfied condition "Succeeded or Failed" May 4 11:32:44.707: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-2b4145c3-05ee-41b5-aff3-afc1782835b9 container env-test: STEP: delete the pod May 4 11:32:44.779: INFO: Waiting for pod pod-configmaps-2b4145c3-05ee-41b5-aff3-afc1782835b9 to disappear May 4 11:32:44.792: INFO: Pod pod-configmaps-2b4145c3-05ee-41b5-aff3-afc1782835b9 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:32:44.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4378" for this suite. 
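Note on the secrets test above: the Secret is exposed to the container through the environment rather than a volume. A sketch of the two common ways to do that with v1.18-era types (the secret name, key, prefix, and image are placeholders):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	container := corev1.Container{
		Name:  "env-test",
		Image: "registry.example.com/busybox:latest", // placeholder image
		// 1) A single secret key mapped to one environment variable.
		Env: []corev1.EnvVar{{
			Name: "SECRET_DATA",
			ValueFrom: &corev1.EnvVarSource{
				SecretKeyRef: &corev1.SecretKeySelector{
					LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}, // placeholder secret
					Key:                  "data-1",
				},
			},
		}},
		// 2) Every key of the secret injected, with an optional prefix.
		EnvFrom: []corev1.EnvFromSource{{
			Prefix:    "p_",
			SecretRef: &corev1.SecretEnvSource{LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"}},
		}},
	}
	fmt.Printf("env=%+v envFrom=%+v\n", container.Env, container.EnvFrom)
}
```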
• [SLOW TEST:7.292 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1803,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:32:44.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-e8a7c6cc-c1b8-4e05-8230-539ed274d856 STEP: Creating a pod to test consume secrets May 4 11:32:44.938: INFO: Waiting up to 5m0s for pod "pod-secrets-ff966c45-dae7-48c8-8afe-f4004c8f09d5" in namespace "secrets-1926" to be "Succeeded or Failed" May 4 11:32:44.941: INFO: Pod "pod-secrets-ff966c45-dae7-48c8-8afe-f4004c8f09d5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.334172ms May 4 11:32:46.946: INFO: Pod "pod-secrets-ff966c45-dae7-48c8-8afe-f4004c8f09d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007926998s May 4 11:32:48.970: INFO: Pod "pod-secrets-ff966c45-dae7-48c8-8afe-f4004c8f09d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032421749s STEP: Saw pod success May 4 11:32:48.970: INFO: Pod "pod-secrets-ff966c45-dae7-48c8-8afe-f4004c8f09d5" satisfied condition "Succeeded or Failed" May 4 11:32:48.973: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-ff966c45-dae7-48c8-8afe-f4004c8f09d5 container secret-volume-test: STEP: delete the pod May 4 11:32:49.042: INFO: Waiting for pod pod-secrets-ff966c45-dae7-48c8-8afe-f4004c8f09d5 to disappear May 4 11:32:49.062: INFO: Pod pod-secrets-ff966c45-dae7-48c8-8afe-f4004c8f09d5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:32:49.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1926" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1822,"failed":0} SSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:32:49.108: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 4 11:32:53.803: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b78a3160-b5bf-4767-a258-eafd1c9c3b5e" May 4 11:32:53.803: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b78a3160-b5bf-4767-a258-eafd1c9c3b5e" in namespace "pods-2217" to be "terminated due to deadline exceeded" May 4 11:32:53.861: INFO: Pod "pod-update-activedeadlineseconds-b78a3160-b5bf-4767-a258-eafd1c9c3b5e": Phase="Running", Reason="", readiness=true. Elapsed: 57.896567ms May 4 11:32:55.865: INFO: Pod "pod-update-activedeadlineseconds-b78a3160-b5bf-4767-a258-eafd1c9c3b5e": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.062179632s May 4 11:32:55.865: INFO: Pod "pod-update-activedeadlineseconds-b78a3160-b5bf-4767-a258-eafd1c9c3b5e" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:32:55.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2217" for this suite. 
• [SLOW TEST:6.767 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1826,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:32:55.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-77147d65-3aa5-4870-bc83-eff98bdd2d97 in namespace container-probe-4792 May 4 11:32:59.959: INFO: Started pod liveness-77147d65-3aa5-4870-bc83-eff98bdd2d97 in namespace container-probe-4792 STEP: checking the pod's current state and verifying that restartCount is present May 4 11:32:59.963: INFO: Initial restart count of pod liveness-77147d65-3aa5-4870-bc83-eff98bdd2d97 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:37:02.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4792" for this suite. 
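The probing test above runs a pod for roughly four minutes and verifies that its restart count stays at zero while a tcp:8080 liveness probe keeps succeeding. Here is a minimal sketch of a container wired up that way using the k8s.io/api types; it is not the test's own code, and the image and args are illustrative. Note the version caveat in the comment on the probe field.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12", // illustrative
				Args:  []string{"netexec", "--http-port=8080"},                     // serves on 8080
				LivenessProbe: &corev1.Probe{
					// k8s.io/api at the v1.18 level shown in this run embeds the
					// probe action as "Handler"; releases from 1.24 onward renamed
					// it to "ProbeHandler".
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```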
• [SLOW TEST:246.733 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1847,"failed":0} SSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:37:02.609: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars May 4 11:37:02.950: INFO: Waiting up to 5m0s for pod "downward-api-5d93a3e5-00ef-407d-926d-cc5d9d4b7495" in namespace "downward-api-2733" to be "Succeeded or Failed" May 4 11:37:03.016: INFO: Pod "downward-api-5d93a3e5-00ef-407d-926d-cc5d9d4b7495": Phase="Pending", Reason="", readiness=false. Elapsed: 65.392765ms May 4 11:37:05.026: INFO: Pod "downward-api-5d93a3e5-00ef-407d-926d-cc5d9d4b7495": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075631496s May 4 11:37:07.031: INFO: Pod "downward-api-5d93a3e5-00ef-407d-926d-cc5d9d4b7495": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080058652s STEP: Saw pod success May 4 11:37:07.031: INFO: Pod "downward-api-5d93a3e5-00ef-407d-926d-cc5d9d4b7495" satisfied condition "Succeeded or Failed" May 4 11:37:07.033: INFO: Trying to get logs from node kali-worker2 pod downward-api-5d93a3e5-00ef-407d-926d-cc5d9d4b7495 container dapi-container: STEP: delete the pod May 4 11:37:07.109: INFO: Waiting for pod downward-api-5d93a3e5-00ef-407d-926d-cc5d9d4b7495 to disappear May 4 11:37:07.151: INFO: Pod downward-api-5d93a3e5-00ef-407d-926d-cc5d9d4b7495 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:37:07.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2733" for this suite. 
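The Downward API test above injects the pod's own UID into the container environment through a fieldRef. A minimal sketch of such a pod spec follows (illustrative names and image; not the e2e helper itself).

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-uid-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // print the injected variables and exit
				Env: []corev1.EnvVar{{
					// POD_UID is filled in by the kubelet from the pod's metadata.
					Name: "POD_UID",
					ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{
							APIVersion: "v1",
							FieldPath:  "metadata.uid",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```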
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1855,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:37:07.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:37:07.212: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:37:13.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8891" for this suite. • [SLOW TEST:6.329 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":106,"skipped":1900,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:37:13.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 4 11:37:13.579: INFO: Pod name pod-release: Found 0 pods out of 1 May 4 11:37:18.582: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:37:18.623: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7088" for this suite. • [SLOW TEST:5.304 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":107,"skipped":1913,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:37:18.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-2753/configmap-test-ecb446cb-0c71-4464-a56c-3f8a3d4e46b6 STEP: Creating a pod to test consume configMaps May 4 11:37:19.014: INFO: Waiting up to 5m0s for pod "pod-configmaps-62d38eae-223f-4a20-904c-def94f478a98" in namespace "configmap-2753" to be "Succeeded or Failed" May 4 11:37:19.043: INFO: Pod "pod-configmaps-62d38eae-223f-4a20-904c-def94f478a98": Phase="Pending", Reason="", readiness=false. Elapsed: 28.612663ms May 4 11:37:21.157: INFO: Pod "pod-configmaps-62d38eae-223f-4a20-904c-def94f478a98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142583231s May 4 11:37:23.161: INFO: Pod "pod-configmaps-62d38eae-223f-4a20-904c-def94f478a98": Phase="Running", Reason="", readiness=true. Elapsed: 4.147278456s May 4 11:37:25.228: INFO: Pod "pod-configmaps-62d38eae-223f-4a20-904c-def94f478a98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.214463735s STEP: Saw pod success May 4 11:37:25.229: INFO: Pod "pod-configmaps-62d38eae-223f-4a20-904c-def94f478a98" satisfied condition "Succeeded or Failed" May 4 11:37:25.243: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-62d38eae-223f-4a20-904c-def94f478a98 container env-test: STEP: delete the pod May 4 11:37:25.823: INFO: Waiting for pod pod-configmaps-62d38eae-223f-4a20-904c-def94f478a98 to disappear May 4 11:37:25.859: INFO: Pod pod-configmaps-62d38eae-223f-4a20-904c-def94f478a98 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:37:25.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2753" for this suite. 
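The ConfigMap test above exposes a single ConfigMap key to the container as an environment variable. A minimal sketch of that wiring follows; the ConfigMap name, key and variable name are illustrative, not taken from the run.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"}, // print the environment and exit
				Env: []corev1.EnvVar{{
					Name: "CONFIG_DATA_1",
					ValueFrom: &corev1.EnvVarSource{
						// Pull one key out of the ConfigMap into the variable.
						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "configmap-test-example", // illustrative name
							},
							Key: "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```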
• [SLOW TEST:7.117 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1959,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:37:25.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 11:37:26.921: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 11:37:28.931: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189047, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189047, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189047, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189046, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 11:37:31.968: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 4 11:37:36.077: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config attach --namespace=webhook-5040 to-be-attached-pod -i -c=container1' May 4 11:37:36.187: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:37:36.192: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "webhook-5040" for this suite. STEP: Destroying namespace "webhook-5040-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.436 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":109,"skipped":1983,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:37:36.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin May 4 11:37:36.446: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b972215a-cdee-49e7-bb42-904cbc907af4" in namespace "projected-9000" to be "Succeeded or Failed" May 4 11:37:36.489: INFO: Pod "downwardapi-volume-b972215a-cdee-49e7-bb42-904cbc907af4": Phase="Pending", Reason="", readiness=false. Elapsed: 43.788999ms May 4 11:37:38.493: INFO: Pod "downwardapi-volume-b972215a-cdee-49e7-bb42-904cbc907af4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047442098s May 4 11:37:40.504: INFO: Pod "downwardapi-volume-b972215a-cdee-49e7-bb42-904cbc907af4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058430422s STEP: Saw pod success May 4 11:37:40.504: INFO: Pod "downwardapi-volume-b972215a-cdee-49e7-bb42-904cbc907af4" satisfied condition "Succeeded or Failed" May 4 11:37:40.507: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b972215a-cdee-49e7-bb42-904cbc907af4 container client-container: STEP: delete the pod May 4 11:37:40.546: INFO: Waiting for pod downwardapi-volume-b972215a-cdee-49e7-bb42-904cbc907af4 to disappear May 4 11:37:40.558: INFO: Pod downwardapi-volume-b972215a-cdee-49e7-bb42-904cbc907af4 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:37:40.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9000" for this suite. 
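The projected downwardAPI test above leaves the container's CPU limit unset and reads limits.cpu from a projected downward API volume, expecting the value to default to the node's allocatable CPU. A minimal sketch of that volume layout follows; the paths, names and image are illustrative.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_limit",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.cpu",
										Divisor:       resource.MustParse("1m"),
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				// No CPU limit is set, so the projected limits.cpu value falls
				// back to the node's allocatable CPU, which is what the test checks.
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```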
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":1990,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:37:40.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 4 11:37:41.105: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 4 11:37:43.205: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189061, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189061, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189061, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189061, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 4 11:37:46.276: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:37:46.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-421" for this suite. STEP: Destroying namespace "webhook-421-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.930 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":111,"skipped":1993,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:37:46.495: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-fjmw STEP: Creating a pod to test atomic-volume-subpath May 4 11:37:46.562: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-fjmw" in namespace "subpath-9394" to be "Succeeded or Failed" May 4 11:37:46.611: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Pending", Reason="", readiness=false. Elapsed: 49.117616ms May 4 11:37:48.614: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052074958s May 4 11:37:50.618: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 4.056283399s May 4 11:37:52.622: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 6.059980226s May 4 11:37:54.627: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 8.064446331s May 4 11:37:56.631: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 10.068957017s May 4 11:37:58.636: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 12.073537682s May 4 11:38:00.640: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 14.078008309s May 4 11:38:02.644: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 16.08147896s May 4 11:38:04.650: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 18.088074158s May 4 11:38:06.655: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 20.09276119s May 4 11:38:08.659: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Running", Reason="", readiness=true. Elapsed: 22.097318405s May 4 11:38:10.664: INFO: Pod "pod-subpath-test-secret-fjmw": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.101563654s STEP: Saw pod success May 4 11:38:10.664: INFO: Pod "pod-subpath-test-secret-fjmw" satisfied condition "Succeeded or Failed" May 4 11:38:10.666: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-fjmw container test-container-subpath-secret-fjmw: STEP: delete the pod May 4 11:38:10.698: INFO: Waiting for pod pod-subpath-test-secret-fjmw to disappear May 4 11:38:10.713: INFO: Pod pod-subpath-test-secret-fjmw no longer exists STEP: Deleting pod pod-subpath-test-secret-fjmw May 4 11:38:10.713: INFO: Deleting pod "pod-subpath-test-secret-fjmw" in namespace "subpath-9394" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:38:10.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9394" for this suite. • [SLOW TEST:24.229 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":112,"skipped":1993,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:38:10.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:38:10.818: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 4 11:38:10.864: INFO: Pod name sample-pod: Found 0 pods out of 1 May 4 11:38:15.868: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 4 11:38:15.868: INFO: Creating deployment "test-rolling-update-deployment" May 4 11:38:15.872: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 4 11:38:15.881: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 4 11:38:17.903: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 4 11:38:17.906: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189095, 
loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189095, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189096, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189095, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)} May 4 11:38:19.918: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 May 4 11:38:19.926: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-1038 /apis/apps/v1/namespaces/deployment-1038/deployments/test-rolling-update-deployment 272e9658-4697-4ccb-8502-8241c7a7698a 1427631 1 2020-05-04 11:38:15 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] [{e2e.test Update apps/v1 2020-05-04 11:38:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 
99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-04 11:38:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f5e228 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-04 11:38:15 +0000 UTC,LastTransitionTime:2020-05-04 11:38:15 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-05-04 11:38:19 +0000 UTC,LastTransitionTime:2020-05-04 11:38:15 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 4 11:38:19.930: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7 deployment-1038 /apis/apps/v1/namespaces/deployment-1038/replicasets/test-rolling-update-deployment-59d5cb45c7 ed677c89-6359-4206-8324-bcdb5df1717a 1427620 1 2020-05-04 11:38:15 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 272e9658-4697-4ccb-8502-8241c7a7698a 0xc002f5e797 0xc002f5e798}] [] [{kube-controller-manager Update apps/v1 2020-05-04 11:38:19 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 50 101 57 54 53 56 45 52 54 57 55 45 52 99 99 98 45 56 53 48 50 45 56 50 52 49 99 55 97 55 54 57 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 
104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002f5e828 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 4 11:38:19.930: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 4 11:38:19.930: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-1038 /apis/apps/v1/namespaces/deployment-1038/replicasets/test-rolling-update-controller db17b912-f461-4e5b-a083-05d40c2dd32d 1427630 2 2020-05-04 11:38:10 +0000 UTC map[name:sample-pod pod:httpd] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 272e9658-4697-4ccb-8502-8241c7a7698a 0xc002f5e687 0xc002f5e688}] [] [{e2e.test Update apps/v1 2020-05-04 11:38:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-04 11:38:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 50 55 50 101 57 54 53 56 45 52 54 57 55 45 52 99 99 98 45 56 53 48 50 45 56 50 52 49 99 55 97 55 54 57 56 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 
123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002f5e728 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 4 11:38:19.936: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-6mtx6" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-6mtx6 test-rolling-update-deployment-59d5cb45c7- deployment-1038 /api/v1/namespaces/deployment-1038/pods/test-rolling-update-deployment-59d5cb45c7-6mtx6 79a174db-dddf-4b9f-a27b-03ffb36897be 1427619 0 2020-05-04 11:38:15 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 ed677c89-6359-4206-8324-bcdb5df1717a 0xc002f5ecf7 0xc002f5ecf8}] [] [{kube-controller-manager Update v1 2020-05-04 11:38:15 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 100 54 55 55 99 56 57 45 54 51 53 57 45 52 50 48 54 45 56 51 50 52 45 98 99 100 98 53 100 102 49 55 49 55 97 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 
103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 11:38:19 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 54 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bsn66,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bsn66,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bsn66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:38:16 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:38:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:38:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 11:38:15 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.161,StartTime:2020-05-04 11:38:16 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 11:38:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://b5a65ddced68eec6e80bf5349fb8d6f96c95eae399bf8ba54414aebd3a5c8fda,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:38:19.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1038" for this suite. • [SLOW TEST:9.223 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":113,"skipped":2002,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:38:19.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-db24ca21-7602-4dfe-90b8-22d388e008a6 STEP: Creating a pod to test consume secrets May 4 11:38:20.171: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1809222c-f0fa-4999-a290-001070fe9c97" in namespace "projected-4469" to be "Succeeded or Failed" May 4 11:38:20.182: INFO: Pod "pod-projected-secrets-1809222c-f0fa-4999-a290-001070fe9c97": Phase="Pending", Reason="", readiness=false. Elapsed: 10.575736ms May 4 11:38:22.187: INFO: Pod "pod-projected-secrets-1809222c-f0fa-4999-a290-001070fe9c97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015225727s May 4 11:38:24.191: INFO: Pod "pod-projected-secrets-1809222c-f0fa-4999-a290-001070fe9c97": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019668048s STEP: Saw pod success May 4 11:38:24.191: INFO: Pod "pod-projected-secrets-1809222c-f0fa-4999-a290-001070fe9c97" satisfied condition "Succeeded or Failed" May 4 11:38:24.194: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-1809222c-f0fa-4999-a290-001070fe9c97 container projected-secret-volume-test: STEP: delete the pod May 4 11:38:24.226: INFO: Waiting for pod pod-projected-secrets-1809222c-f0fa-4999-a290-001070fe9c97 to disappear May 4 11:38:24.235: INFO: Pod pod-projected-secrets-1809222c-f0fa-4999-a290-001070fe9c97 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:38:24.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4469" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":2009,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:38:24.242: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-projected-47fj STEP: Creating a pod to test atomic-volume-subpath May 4 11:38:24.384: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-47fj" in namespace "subpath-2082" to be "Succeeded or Failed" May 4 11:38:24.420: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Pending", Reason="", readiness=false. Elapsed: 36.148105ms May 4 11:38:26.424: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039724301s May 4 11:38:28.428: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. Elapsed: 4.0441559s May 4 11:38:30.432: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. Elapsed: 6.048450263s May 4 11:38:32.436: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. Elapsed: 8.052383565s May 4 11:38:34.440: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. Elapsed: 10.056204409s May 4 11:38:36.444: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. Elapsed: 12.060259166s May 4 11:38:38.449: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. Elapsed: 14.064667836s May 4 11:38:40.453: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. Elapsed: 16.069235786s May 4 11:38:42.457: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.073433374s May 4 11:38:44.462: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. Elapsed: 20.078134223s May 4 11:38:46.466: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Running", Reason="", readiness=true. Elapsed: 22.082076303s May 4 11:38:48.504: INFO: Pod "pod-subpath-test-projected-47fj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.120551711s STEP: Saw pod success May 4 11:38:48.504: INFO: Pod "pod-subpath-test-projected-47fj" satisfied condition "Succeeded or Failed" May 4 11:38:48.507: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-47fj container test-container-subpath-projected-47fj: STEP: delete the pod May 4 11:38:48.544: INFO: Waiting for pod pod-subpath-test-projected-47fj to disappear May 4 11:38:48.554: INFO: Pod pod-subpath-test-projected-47fj no longer exists STEP: Deleting pod pod-subpath-test-projected-47fj May 4 11:38:48.554: INFO: Deleting pod "pod-subpath-test-projected-47fj" in namespace "subpath-2082" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:38:48.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2082" for this suite. • [SLOW TEST:24.324 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":115,"skipped":2012,"failed":0} [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:38:48.567: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 4 11:38:52.954: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-5877 PodName:pod-sharedvolume-155cf357-4c19-4cc1-8915-2c5899e4571c ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 4 11:38:52.954: INFO: >>> kubeConfig: /root/.kube/config I0504 11:38:52.995446 7 log.go:172] (0xc001e60840) (0xc001e17900) Create stream I0504 11:38:52.995496 7 log.go:172] (0xc001e60840) (0xc001e17900) Stream added, broadcasting: 1 I0504 11:38:53.000340 7 log.go:172] (0xc001e60840) Reply frame received for 1 I0504 11:38:53.000398 7 log.go:172] (0xc001e60840) (0xc001e179a0) Create stream I0504 11:38:53.000415 7 log.go:172] 
(0xc001e60840) (0xc001e179a0) Stream added, broadcasting: 3 I0504 11:38:53.001665 7 log.go:172] (0xc001e60840) Reply frame received for 3 I0504 11:38:53.001706 7 log.go:172] (0xc001e60840) (0xc001f4c000) Create stream I0504 11:38:53.001719 7 log.go:172] (0xc001e60840) (0xc001f4c000) Stream added, broadcasting: 5 I0504 11:38:53.002668 7 log.go:172] (0xc001e60840) Reply frame received for 5 I0504 11:38:53.061510 7 log.go:172] (0xc001e60840) Data frame received for 5 I0504 11:38:53.061552 7 log.go:172] (0xc001f4c000) (5) Data frame handling I0504 11:38:53.061579 7 log.go:172] (0xc001e60840) Data frame received for 3 I0504 11:38:53.061591 7 log.go:172] (0xc001e179a0) (3) Data frame handling I0504 11:38:53.061603 7 log.go:172] (0xc001e179a0) (3) Data frame sent I0504 11:38:53.061613 7 log.go:172] (0xc001e60840) Data frame received for 3 I0504 11:38:53.061622 7 log.go:172] (0xc001e179a0) (3) Data frame handling I0504 11:38:53.062559 7 log.go:172] (0xc001e60840) Data frame received for 1 I0504 11:38:53.062587 7 log.go:172] (0xc001e17900) (1) Data frame handling I0504 11:38:53.062610 7 log.go:172] (0xc001e17900) (1) Data frame sent I0504 11:38:53.062753 7 log.go:172] (0xc001e60840) (0xc001e17900) Stream removed, broadcasting: 1 I0504 11:38:53.062787 7 log.go:172] (0xc001e60840) Go away received I0504 11:38:53.062896 7 log.go:172] (0xc001e60840) (0xc001e17900) Stream removed, broadcasting: 1 I0504 11:38:53.062923 7 log.go:172] (0xc001e60840) (0xc001e179a0) Stream removed, broadcasting: 3 I0504 11:38:53.062937 7 log.go:172] (0xc001e60840) (0xc001f4c000) Stream removed, broadcasting: 5 May 4 11:38:53.062: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:38:53.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5877" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":116,"skipped":2012,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:38:53.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 4 11:38:53.204: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 4 11:38:53.234: INFO: Waiting for terminating namespaces to be deleted... 
May 4 11:38:53.236: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 4 11:38:53.247: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:38:53.247: INFO: Container kube-proxy ready: true, restart count 0 May 4 11:38:53.247: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:38:53.247: INFO: Container kindnet-cni ready: true, restart count 1 May 4 11:38:53.247: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 4 11:38:53.252: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:38:53.252: INFO: Container kindnet-cni ready: true, restart count 0 May 4 11:38:53.252: INFO: pod-sharedvolume-155cf357-4c19-4cc1-8915-2c5899e4571c from emptydir-5877 started at 2020-05-04 11:38:49 +0000 UTC (2 container statuses recorded) May 4 11:38:53.252: INFO: Container busybox-main-container ready: true, restart count 0 May 4 11:38:53.252: INFO: Container busybox-sub-container ready: false, restart count 0 May 4 11:38:53.252: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:38:53.252: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-80a89b91-10dc-44e1-ab44-73a863aebecc 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-80a89b91-10dc-44e1-ab44-73a863aebecc off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-80a89b91-10dc-44e1-ab44-73a863aebecc [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:44:01.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6704" for this suite. 
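
The hostPort conflict exercised above comes down to two pod specs that both claim hostPort 54322 on the same node, one on all interfaces (empty hostIP) and one on 127.0.0.1. Below is a minimal client-go sketch of that pair, not the e2e framework's own helper: the namespace, container port, and the kubernetes.io/hostname selector (used instead of the random kubernetes.io/e2e-* label the suite applied) are assumptions; the agnhost image is the one visible in this run's pod dumps.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// hostPortPod builds a pod that requests hostPort 54322 with the given hostIP
// and is steered to nodeName via the well-known hostname label.
func hostPortPod(name, nodeName, hostIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"kubernetes.io/hostname": nodeName},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080, // illustrative; only the hostPort/hostIP pair matters here
					HostPort:      54322,
					HostIP:        hostIP, // "" is treated as 0.0.0.0
					Protocol:      corev1.ProtocolTCP,
				}},
			}},
		},
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()
	ns := "default" // the suite uses its generated sched-pred-* namespace

	// pod4: hostPort 54322 on 0.0.0.0 schedules normally.
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, hostPortPod("pod4", "kali-worker", ""), metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	// pod5: the same hostPort on 127.0.0.1 of the same node conflicts, so the
	// scheduler is expected to leave it Pending.
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, hostPortPod("pod5", "kali-worker", "127.0.0.1"), metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("created pod4 and pod5; pod5 should stay Pending on the port conflict")
}
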
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.530 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":117,"skipped":2015,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:44:01.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on tmpfs May 4 11:44:01.691: INFO: Waiting up to 5m0s for pod "pod-8539c124-a650-46a3-bb7d-ba7ef84034e1" in namespace "emptydir-4501" to be "Succeeded or Failed" May 4 11:44:01.695: INFO: Pod "pod-8539c124-a650-46a3-bb7d-ba7ef84034e1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.408236ms May 4 11:44:03.698: INFO: Pod "pod-8539c124-a650-46a3-bb7d-ba7ef84034e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007394851s May 4 11:44:05.703: INFO: Pod "pod-8539c124-a650-46a3-bb7d-ba7ef84034e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012077361s STEP: Saw pod success May 4 11:44:05.703: INFO: Pod "pod-8539c124-a650-46a3-bb7d-ba7ef84034e1" satisfied condition "Succeeded or Failed" May 4 11:44:05.706: INFO: Trying to get logs from node kali-worker pod pod-8539c124-a650-46a3-bb7d-ba7ef84034e1 container test-container: STEP: delete the pod May 4 11:44:05.732: INFO: Waiting for pod pod-8539c124-a650-46a3-bb7d-ba7ef84034e1 to disappear May 4 11:44:05.736: INFO: Pod pod-8539c124-a650-46a3-bb7d-ba7ef84034e1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:44:05.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4501" for this suite. 
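
The (root,0644,tmpfs) case above reduces to an emptyDir with medium Memory mounted by a single container that creates a 0644 file and reports what it sees. A rough client-go sketch follows; the busybox image, shell command and namespace are stand-ins for the test's own mounttest container, not values taken from this run.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "emptydir-tmpfs-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium=Memory asks the kubelet to back the emptyDir with tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // stand-in for the test's mounttest image
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a' /test-volume/f && mount | grep /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}

	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "- its log should show mode 644 and a tmpfs mount")
}
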
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":118,"skipped":2035,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:44:05.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command May 4 11:44:05.852: INFO: Waiting up to 5m0s for pod "var-expansion-d0862320-f280-49f7-8c85-fdcc4536bc97" in namespace "var-expansion-3517" to be "Succeeded or Failed" May 4 11:44:05.856: INFO: Pod "var-expansion-d0862320-f280-49f7-8c85-fdcc4536bc97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057402ms May 4 11:44:07.968: INFO: Pod "var-expansion-d0862320-f280-49f7-8c85-fdcc4536bc97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116107122s May 4 11:44:09.972: INFO: Pod "var-expansion-d0862320-f280-49f7-8c85-fdcc4536bc97": Phase="Running", Reason="", readiness=true. Elapsed: 4.120426765s May 4 11:44:11.977: INFO: Pod "var-expansion-d0862320-f280-49f7-8c85-fdcc4536bc97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12520401s STEP: Saw pod success May 4 11:44:11.977: INFO: Pod "var-expansion-d0862320-f280-49f7-8c85-fdcc4536bc97" satisfied condition "Succeeded or Failed" May 4 11:44:11.980: INFO: Trying to get logs from node kali-worker pod var-expansion-d0862320-f280-49f7-8c85-fdcc4536bc97 container dapi-container: STEP: delete the pod May 4 11:44:12.016: INFO: Waiting for pod var-expansion-d0862320-f280-49f7-8c85-fdcc4536bc97 to disappear May 4 11:44:12.032: INFO: Pod var-expansion-d0862320-f280-49f7-8c85-fdcc4536bc97 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:44:12.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3517" for this suite. 
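
The variable-expansion case above is the kubelet-side rewrite of $(VAR) references in a container's command from that container's env, before any shell runs. A small sketch, with an assumed busybox image and message value (the real test uses its own dapi-container image and strings):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "var-expansion-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox", // stand-in image
				Env: []corev1.EnvVar{{
					Name:  "MESSAGE",
					Value: "hello from the pod spec",
				}},
				// The kubelet substitutes $(MESSAGE) before running the
				// entrypoint; no shell is needed for the expansion.
				Command: []string{"echo", "message is: $(MESSAGE)"},
			}},
		},
	}

	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "- its log should contain the substituted value")
}
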
• [SLOW TEST:6.297 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":119,"skipped":2084,"failed":0} SSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:44:12.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5520 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-5520 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5520 May 4 11:44:12.199: INFO: Found 0 stateful pods, waiting for 1 May 4 11:44:22.204: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 4 11:44:22.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5520 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 11:44:25.414: INFO: stderr: "I0504 11:44:25.299954 1647 log.go:172] (0xc0008aed10) (0xc000892280) Create stream\nI0504 11:44:25.299987 1647 log.go:172] (0xc0008aed10) (0xc000892280) Stream added, broadcasting: 1\nI0504 11:44:25.302813 1647 log.go:172] (0xc0008aed10) Reply frame received for 1\nI0504 11:44:25.302866 1647 log.go:172] (0xc0008aed10) (0xc0006b72c0) Create stream\nI0504 11:44:25.302882 1647 log.go:172] (0xc0008aed10) (0xc0006b72c0) Stream added, broadcasting: 3\nI0504 11:44:25.303725 1647 log.go:172] (0xc0008aed10) Reply frame received for 3\nI0504 11:44:25.303748 1647 log.go:172] (0xc0008aed10) (0xc0006b7540) Create stream\nI0504 11:44:25.303755 1647 log.go:172] (0xc0008aed10) (0xc0006b7540) Stream added, broadcasting: 5\nI0504 11:44:25.304650 1647 log.go:172] (0xc0008aed10) Reply frame received for 5\nI0504 11:44:25.376755 1647 log.go:172] (0xc0008aed10) Data frame received for 5\nI0504 11:44:25.376778 1647 log.go:172] (0xc0006b7540) (5) Data frame handling\nI0504 11:44:25.376791 1647 
log.go:172] (0xc0006b7540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0504 11:44:25.404959 1647 log.go:172] (0xc0008aed10) Data frame received for 3\nI0504 11:44:25.404999 1647 log.go:172] (0xc0006b72c0) (3) Data frame handling\nI0504 11:44:25.405022 1647 log.go:172] (0xc0006b72c0) (3) Data frame sent\nI0504 11:44:25.405691 1647 log.go:172] (0xc0008aed10) Data frame received for 5\nI0504 11:44:25.405731 1647 log.go:172] (0xc0006b7540) (5) Data frame handling\nI0504 11:44:25.405767 1647 log.go:172] (0xc0008aed10) Data frame received for 3\nI0504 11:44:25.405801 1647 log.go:172] (0xc0006b72c0) (3) Data frame handling\nI0504 11:44:25.407700 1647 log.go:172] (0xc0008aed10) Data frame received for 1\nI0504 11:44:25.407727 1647 log.go:172] (0xc000892280) (1) Data frame handling\nI0504 11:44:25.407753 1647 log.go:172] (0xc000892280) (1) Data frame sent\nI0504 11:44:25.407786 1647 log.go:172] (0xc0008aed10) (0xc000892280) Stream removed, broadcasting: 1\nI0504 11:44:25.407815 1647 log.go:172] (0xc0008aed10) Go away received\nI0504 11:44:25.408245 1647 log.go:172] (0xc0008aed10) (0xc000892280) Stream removed, broadcasting: 1\nI0504 11:44:25.408270 1647 log.go:172] (0xc0008aed10) (0xc0006b72c0) Stream removed, broadcasting: 3\nI0504 11:44:25.408284 1647 log.go:172] (0xc0008aed10) (0xc0006b7540) Stream removed, broadcasting: 5\n" May 4 11:44:25.414: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 11:44:25.414: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 11:44:25.418: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 4 11:44:35.423: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 4 11:44:35.423: INFO: Waiting for statefulset status.replicas updated to 0 May 4 11:44:35.446: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998962s May 4 11:44:36.452: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.987551235s May 4 11:44:37.456: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981939627s May 4 11:44:38.462: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.97745262s May 4 11:44:39.467: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.971929251s May 4 11:44:40.472: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.966886907s May 4 11:44:41.476: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.961534301s May 4 11:44:42.482: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.956758867s May 4 11:44:43.487: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.951035677s May 4 11:44:44.492: INFO: Verifying statefulset ss doesn't scale past 1 for another 946.790561ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5520 May 4 11:44:45.497: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5520 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:44:45.709: INFO: stderr: "I0504 11:44:45.636295 1681 log.go:172] (0xc00089ea50) (0xc0004d9540) Create stream\nI0504 11:44:45.636372 1681 log.go:172] (0xc00089ea50) (0xc0004d9540) Stream added, broadcasting: 1\nI0504 11:44:45.639073 1681 log.go:172] (0xc00089ea50) Reply frame 
received for 1\nI0504 11:44:45.639121 1681 log.go:172] (0xc00089ea50) (0xc00097e000) Create stream\nI0504 11:44:45.639133 1681 log.go:172] (0xc00089ea50) (0xc00097e000) Stream added, broadcasting: 3\nI0504 11:44:45.640258 1681 log.go:172] (0xc00089ea50) Reply frame received for 3\nI0504 11:44:45.640314 1681 log.go:172] (0xc00089ea50) (0xc000404000) Create stream\nI0504 11:44:45.640336 1681 log.go:172] (0xc00089ea50) (0xc000404000) Stream added, broadcasting: 5\nI0504 11:44:45.641640 1681 log.go:172] (0xc00089ea50) Reply frame received for 5\nI0504 11:44:45.702502 1681 log.go:172] (0xc00089ea50) Data frame received for 5\nI0504 11:44:45.702535 1681 log.go:172] (0xc000404000) (5) Data frame handling\nI0504 11:44:45.702543 1681 log.go:172] (0xc000404000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0504 11:44:45.702555 1681 log.go:172] (0xc00089ea50) Data frame received for 3\nI0504 11:44:45.702561 1681 log.go:172] (0xc00097e000) (3) Data frame handling\nI0504 11:44:45.702573 1681 log.go:172] (0xc00097e000) (3) Data frame sent\nI0504 11:44:45.702774 1681 log.go:172] (0xc00089ea50) Data frame received for 5\nI0504 11:44:45.702795 1681 log.go:172] (0xc000404000) (5) Data frame handling\nI0504 11:44:45.702811 1681 log.go:172] (0xc00089ea50) Data frame received for 3\nI0504 11:44:45.702828 1681 log.go:172] (0xc00097e000) (3) Data frame handling\nI0504 11:44:45.704792 1681 log.go:172] (0xc00089ea50) Data frame received for 1\nI0504 11:44:45.704809 1681 log.go:172] (0xc0004d9540) (1) Data frame handling\nI0504 11:44:45.704824 1681 log.go:172] (0xc0004d9540) (1) Data frame sent\nI0504 11:44:45.704835 1681 log.go:172] (0xc00089ea50) (0xc0004d9540) Stream removed, broadcasting: 1\nI0504 11:44:45.705000 1681 log.go:172] (0xc00089ea50) Go away received\nI0504 11:44:45.705309 1681 log.go:172] (0xc00089ea50) (0xc0004d9540) Stream removed, broadcasting: 1\nI0504 11:44:45.705329 1681 log.go:172] (0xc00089ea50) (0xc00097e000) Stream removed, broadcasting: 3\nI0504 11:44:45.705338 1681 log.go:172] (0xc00089ea50) (0xc000404000) Stream removed, broadcasting: 5\n" May 4 11:44:45.709: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 11:44:45.709: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 11:44:45.712: INFO: Found 1 stateful pods, waiting for 3 May 4 11:44:55.717: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 4 11:44:55.717: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 4 11:44:55.717: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 4 11:44:55.723: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5520 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 11:44:55.930: INFO: stderr: "I0504 11:44:55.856463 1702 log.go:172] (0xc0009b2000) (0xc000a12000) Create stream\nI0504 11:44:55.856513 1702 log.go:172] (0xc0009b2000) (0xc000a12000) Stream added, broadcasting: 1\nI0504 11:44:55.858636 1702 log.go:172] (0xc0009b2000) Reply frame received for 1\nI0504 11:44:55.858677 1702 log.go:172] (0xc0009b2000) (0xc000a120a0) Create stream\nI0504 11:44:55.858688 1702 log.go:172] (0xc0009b2000) 
(0xc000a120a0) Stream added, broadcasting: 3\nI0504 11:44:55.859509 1702 log.go:172] (0xc0009b2000) Reply frame received for 3\nI0504 11:44:55.859543 1702 log.go:172] (0xc0009b2000) (0xc0002e4aa0) Create stream\nI0504 11:44:55.859554 1702 log.go:172] (0xc0009b2000) (0xc0002e4aa0) Stream added, broadcasting: 5\nI0504 11:44:55.860415 1702 log.go:172] (0xc0009b2000) Reply frame received for 5\nI0504 11:44:55.922876 1702 log.go:172] (0xc0009b2000) Data frame received for 3\nI0504 11:44:55.922918 1702 log.go:172] (0xc000a120a0) (3) Data frame handling\nI0504 11:44:55.922929 1702 log.go:172] (0xc000a120a0) (3) Data frame sent\nI0504 11:44:55.922941 1702 log.go:172] (0xc0009b2000) Data frame received for 3\nI0504 11:44:55.922957 1702 log.go:172] (0xc0009b2000) Data frame received for 5\nI0504 11:44:55.922976 1702 log.go:172] (0xc0002e4aa0) (5) Data frame handling\nI0504 11:44:55.922992 1702 log.go:172] (0xc0002e4aa0) (5) Data frame sent\nI0504 11:44:55.923000 1702 log.go:172] (0xc0009b2000) Data frame received for 5\nI0504 11:44:55.923006 1702 log.go:172] (0xc0002e4aa0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0504 11:44:55.923021 1702 log.go:172] (0xc000a120a0) (3) Data frame handling\nI0504 11:44:55.924861 1702 log.go:172] (0xc0009b2000) Data frame received for 1\nI0504 11:44:55.924887 1702 log.go:172] (0xc000a12000) (1) Data frame handling\nI0504 11:44:55.924898 1702 log.go:172] (0xc000a12000) (1) Data frame sent\nI0504 11:44:55.924913 1702 log.go:172] (0xc0009b2000) (0xc000a12000) Stream removed, broadcasting: 1\nI0504 11:44:55.924932 1702 log.go:172] (0xc0009b2000) Go away received\nI0504 11:44:55.925372 1702 log.go:172] (0xc0009b2000) (0xc000a12000) Stream removed, broadcasting: 1\nI0504 11:44:55.925385 1702 log.go:172] (0xc0009b2000) (0xc000a120a0) Stream removed, broadcasting: 3\nI0504 11:44:55.925392 1702 log.go:172] (0xc0009b2000) (0xc0002e4aa0) Stream removed, broadcasting: 5\n" May 4 11:44:55.930: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 11:44:55.930: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 11:44:55.930: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5520 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 11:44:56.170: INFO: stderr: "I0504 11:44:56.061674 1723 log.go:172] (0xc00099c840) (0xc000731400) Create stream\nI0504 11:44:56.061762 1723 log.go:172] (0xc00099c840) (0xc000731400) Stream added, broadcasting: 1\nI0504 11:44:56.064747 1723 log.go:172] (0xc00099c840) Reply frame received for 1\nI0504 11:44:56.064807 1723 log.go:172] (0xc00099c840) (0xc0000c8000) Create stream\nI0504 11:44:56.064821 1723 log.go:172] (0xc00099c840) (0xc0000c8000) Stream added, broadcasting: 3\nI0504 11:44:56.066104 1723 log.go:172] (0xc00099c840) Reply frame received for 3\nI0504 11:44:56.066149 1723 log.go:172] (0xc00099c840) (0xc0007314a0) Create stream\nI0504 11:44:56.066171 1723 log.go:172] (0xc00099c840) (0xc0007314a0) Stream added, broadcasting: 5\nI0504 11:44:56.067280 1723 log.go:172] (0xc00099c840) Reply frame received for 5\nI0504 11:44:56.121949 1723 log.go:172] (0xc00099c840) Data frame received for 5\nI0504 11:44:56.121976 1723 log.go:172] (0xc0007314a0) (5) Data frame handling\nI0504 11:44:56.121990 1723 log.go:172] (0xc0007314a0) (5) Data frame sent\n+ mv -v 
/usr/local/apache2/htdocs/index.html /tmp/\nI0504 11:44:56.162328 1723 log.go:172] (0xc00099c840) Data frame received for 3\nI0504 11:44:56.162364 1723 log.go:172] (0xc0000c8000) (3) Data frame handling\nI0504 11:44:56.162475 1723 log.go:172] (0xc0000c8000) (3) Data frame sent\nI0504 11:44:56.162508 1723 log.go:172] (0xc00099c840) Data frame received for 3\nI0504 11:44:56.162535 1723 log.go:172] (0xc0000c8000) (3) Data frame handling\nI0504 11:44:56.162812 1723 log.go:172] (0xc00099c840) Data frame received for 5\nI0504 11:44:56.162834 1723 log.go:172] (0xc0007314a0) (5) Data frame handling\nI0504 11:44:56.164558 1723 log.go:172] (0xc00099c840) Data frame received for 1\nI0504 11:44:56.164592 1723 log.go:172] (0xc000731400) (1) Data frame handling\nI0504 11:44:56.164617 1723 log.go:172] (0xc000731400) (1) Data frame sent\nI0504 11:44:56.164640 1723 log.go:172] (0xc00099c840) (0xc000731400) Stream removed, broadcasting: 1\nI0504 11:44:56.164660 1723 log.go:172] (0xc00099c840) Go away received\nI0504 11:44:56.165419 1723 log.go:172] (0xc00099c840) (0xc000731400) Stream removed, broadcasting: 1\nI0504 11:44:56.165444 1723 log.go:172] (0xc00099c840) (0xc0000c8000) Stream removed, broadcasting: 3\nI0504 11:44:56.165457 1723 log.go:172] (0xc00099c840) (0xc0007314a0) Stream removed, broadcasting: 5\n" May 4 11:44:56.170: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 11:44:56.170: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 11:44:56.170: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5520 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 4 11:44:56.453: INFO: stderr: "I0504 11:44:56.342191 1744 log.go:172] (0xc0000e0c60) (0xc000912140) Create stream\nI0504 11:44:56.342293 1744 log.go:172] (0xc0000e0c60) (0xc000912140) Stream added, broadcasting: 1\nI0504 11:44:56.349994 1744 log.go:172] (0xc0000e0c60) Reply frame received for 1\nI0504 11:44:56.350053 1744 log.go:172] (0xc0000e0c60) (0xc00067d180) Create stream\nI0504 11:44:56.350068 1744 log.go:172] (0xc0000e0c60) (0xc00067d180) Stream added, broadcasting: 3\nI0504 11:44:56.351099 1744 log.go:172] (0xc0000e0c60) Reply frame received for 3\nI0504 11:44:56.351144 1744 log.go:172] (0xc0000e0c60) (0xc0009121e0) Create stream\nI0504 11:44:56.351157 1744 log.go:172] (0xc0000e0c60) (0xc0009121e0) Stream added, broadcasting: 5\nI0504 11:44:56.352017 1744 log.go:172] (0xc0000e0c60) Reply frame received for 5\nI0504 11:44:56.400158 1744 log.go:172] (0xc0000e0c60) Data frame received for 5\nI0504 11:44:56.400204 1744 log.go:172] (0xc0009121e0) (5) Data frame handling\nI0504 11:44:56.400239 1744 log.go:172] (0xc0009121e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0504 11:44:56.445924 1744 log.go:172] (0xc0000e0c60) Data frame received for 3\nI0504 11:44:56.445953 1744 log.go:172] (0xc00067d180) (3) Data frame handling\nI0504 11:44:56.445969 1744 log.go:172] (0xc00067d180) (3) Data frame sent\nI0504 11:44:56.446175 1744 log.go:172] (0xc0000e0c60) Data frame received for 5\nI0504 11:44:56.446191 1744 log.go:172] (0xc0009121e0) (5) Data frame handling\nI0504 11:44:56.446297 1744 log.go:172] (0xc0000e0c60) Data frame received for 3\nI0504 11:44:56.446317 1744 log.go:172] (0xc00067d180) (3) Data frame handling\nI0504 11:44:56.448027 1744 log.go:172] (0xc0000e0c60) Data 
frame received for 1\nI0504 11:44:56.448047 1744 log.go:172] (0xc000912140) (1) Data frame handling\nI0504 11:44:56.448091 1744 log.go:172] (0xc000912140) (1) Data frame sent\nI0504 11:44:56.448117 1744 log.go:172] (0xc0000e0c60) (0xc000912140) Stream removed, broadcasting: 1\nI0504 11:44:56.448155 1744 log.go:172] (0xc0000e0c60) Go away received\nI0504 11:44:56.448607 1744 log.go:172] (0xc0000e0c60) (0xc000912140) Stream removed, broadcasting: 1\nI0504 11:44:56.448645 1744 log.go:172] (0xc0000e0c60) (0xc00067d180) Stream removed, broadcasting: 3\nI0504 11:44:56.448663 1744 log.go:172] (0xc0000e0c60) (0xc0009121e0) Stream removed, broadcasting: 5\n" May 4 11:44:56.453: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 4 11:44:56.453: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 4 11:44:56.453: INFO: Waiting for statefulset status.replicas updated to 0 May 4 11:44:56.495: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 4 11:45:06.503: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 4 11:45:06.503: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 4 11:45:06.503: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 4 11:45:06.533: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999534s May 4 11:45:07.536: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.978638423s May 4 11:45:08.542: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.975072177s May 4 11:45:09.547: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.970144554s May 4 11:45:10.551: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.965092507s May 4 11:45:11.556: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.960439168s May 4 11:45:12.561: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.955783688s May 4 11:45:13.566: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.950508907s May 4 11:45:14.571: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.945946148s May 4 11:45:15.576: INFO: Verifying statefulset ss doesn't scale past 3 for another 940.914911ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-5520 May 4 11:45:16.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5520 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:45:16.818: INFO: stderr: "I0504 11:45:16.717309 1767 log.go:172] (0xc0009520b0) (0xc000444a00) Create stream\nI0504 11:45:16.717393 1767 log.go:172] (0xc0009520b0) (0xc000444a00) Stream added, broadcasting: 1\nI0504 11:45:16.720314 1767 log.go:172] (0xc0009520b0) Reply frame received for 1\nI0504 11:45:16.720347 1767 log.go:172] (0xc0009520b0) (0xc000af0000) Create stream\nI0504 11:45:16.720355 1767 log.go:172] (0xc0009520b0) (0xc000af0000) Stream added, broadcasting: 3\nI0504 11:45:16.721359 1767 log.go:172] (0xc0009520b0) Reply frame received for 3\nI0504 11:45:16.721402 1767 log.go:172] (0xc0009520b0) (0xc000691180) Create stream\nI0504 11:45:16.721417 1767 log.go:172] (0xc0009520b0) (0xc000691180) Stream added, broadcasting: 5\nI0504 11:45:16.722372 1767 log.go:172] 
(0xc0009520b0) Reply frame received for 5\nI0504 11:45:16.810032 1767 log.go:172] (0xc0009520b0) Data frame received for 3\nI0504 11:45:16.810057 1767 log.go:172] (0xc000af0000) (3) Data frame handling\nI0504 11:45:16.810072 1767 log.go:172] (0xc000af0000) (3) Data frame sent\nI0504 11:45:16.810105 1767 log.go:172] (0xc0009520b0) Data frame received for 5\nI0504 11:45:16.810115 1767 log.go:172] (0xc000691180) (5) Data frame handling\nI0504 11:45:16.810126 1767 log.go:172] (0xc000691180) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0504 11:45:16.810368 1767 log.go:172] (0xc0009520b0) Data frame received for 5\nI0504 11:45:16.810414 1767 log.go:172] (0xc000691180) (5) Data frame handling\nI0504 11:45:16.810496 1767 log.go:172] (0xc0009520b0) Data frame received for 3\nI0504 11:45:16.810518 1767 log.go:172] (0xc000af0000) (3) Data frame handling\nI0504 11:45:16.812213 1767 log.go:172] (0xc0009520b0) Data frame received for 1\nI0504 11:45:16.812226 1767 log.go:172] (0xc000444a00) (1) Data frame handling\nI0504 11:45:16.812239 1767 log.go:172] (0xc000444a00) (1) Data frame sent\nI0504 11:45:16.812340 1767 log.go:172] (0xc0009520b0) (0xc000444a00) Stream removed, broadcasting: 1\nI0504 11:45:16.812904 1767 log.go:172] (0xc0009520b0) (0xc000444a00) Stream removed, broadcasting: 1\nI0504 11:45:16.812938 1767 log.go:172] (0xc0009520b0) (0xc000af0000) Stream removed, broadcasting: 3\nI0504 11:45:16.813330 1767 log.go:172] (0xc0009520b0) (0xc000691180) Stream removed, broadcasting: 5\n" May 4 11:45:16.818: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 11:45:16.818: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 11:45:16.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5520 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:45:17.027: INFO: stderr: "I0504 11:45:16.951177 1787 log.go:172] (0xc0006ed080) (0xc00066e5a0) Create stream\nI0504 11:45:16.951268 1787 log.go:172] (0xc0006ed080) (0xc00066e5a0) Stream added, broadcasting: 1\nI0504 11:45:16.956484 1787 log.go:172] (0xc0006ed080) Reply frame received for 1\nI0504 11:45:16.956546 1787 log.go:172] (0xc0006ed080) (0xc0006775e0) Create stream\nI0504 11:45:16.956558 1787 log.go:172] (0xc0006ed080) (0xc0006775e0) Stream added, broadcasting: 3\nI0504 11:45:16.957666 1787 log.go:172] (0xc0006ed080) Reply frame received for 3\nI0504 11:45:16.957705 1787 log.go:172] (0xc0006ed080) (0xc000526a00) Create stream\nI0504 11:45:16.957722 1787 log.go:172] (0xc0006ed080) (0xc000526a00) Stream added, broadcasting: 5\nI0504 11:45:16.958473 1787 log.go:172] (0xc0006ed080) Reply frame received for 5\nI0504 11:45:17.021243 1787 log.go:172] (0xc0006ed080) Data frame received for 3\nI0504 11:45:17.021281 1787 log.go:172] (0xc0006775e0) (3) Data frame handling\nI0504 11:45:17.021292 1787 log.go:172] (0xc0006775e0) (3) Data frame sent\nI0504 11:45:17.021302 1787 log.go:172] (0xc0006ed080) Data frame received for 3\nI0504 11:45:17.021309 1787 log.go:172] (0xc0006775e0) (3) Data frame handling\nI0504 11:45:17.021350 1787 log.go:172] (0xc0006ed080) Data frame received for 5\nI0504 11:45:17.021381 1787 log.go:172] (0xc000526a00) (5) Data frame handling\nI0504 11:45:17.021402 1787 log.go:172] (0xc000526a00) (5) Data frame sent\nI0504 11:45:17.021417 1787 log.go:172] (0xc0006ed080) Data 
frame received for 5\nI0504 11:45:17.021430 1787 log.go:172] (0xc000526a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0504 11:45:17.022604 1787 log.go:172] (0xc0006ed080) Data frame received for 1\nI0504 11:45:17.022626 1787 log.go:172] (0xc00066e5a0) (1) Data frame handling\nI0504 11:45:17.022645 1787 log.go:172] (0xc00066e5a0) (1) Data frame sent\nI0504 11:45:17.022663 1787 log.go:172] (0xc0006ed080) (0xc00066e5a0) Stream removed, broadcasting: 1\nI0504 11:45:17.022681 1787 log.go:172] (0xc0006ed080) Go away received\nI0504 11:45:17.023020 1787 log.go:172] (0xc0006ed080) (0xc00066e5a0) Stream removed, broadcasting: 1\nI0504 11:45:17.023047 1787 log.go:172] (0xc0006ed080) (0xc0006775e0) Stream removed, broadcasting: 3\nI0504 11:45:17.023057 1787 log.go:172] (0xc0006ed080) (0xc000526a00) Stream removed, broadcasting: 5\n" May 4 11:45:17.027: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 11:45:17.027: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 11:45:17.028: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5520 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 4 11:45:17.262: INFO: stderr: "I0504 11:45:17.180185 1807 log.go:172] (0xc000915b80) (0xc0008dc820) Create stream\nI0504 11:45:17.180255 1807 log.go:172] (0xc000915b80) (0xc0008dc820) Stream added, broadcasting: 1\nI0504 11:45:17.185965 1807 log.go:172] (0xc000915b80) Reply frame received for 1\nI0504 11:45:17.186019 1807 log.go:172] (0xc000915b80) (0xc000633720) Create stream\nI0504 11:45:17.186036 1807 log.go:172] (0xc000915b80) (0xc000633720) Stream added, broadcasting: 3\nI0504 11:45:17.187169 1807 log.go:172] (0xc000915b80) Reply frame received for 3\nI0504 11:45:17.187207 1807 log.go:172] (0xc000915b80) (0xc000510b40) Create stream\nI0504 11:45:17.187222 1807 log.go:172] (0xc000915b80) (0xc000510b40) Stream added, broadcasting: 5\nI0504 11:45:17.188240 1807 log.go:172] (0xc000915b80) Reply frame received for 5\nI0504 11:45:17.256527 1807 log.go:172] (0xc000915b80) Data frame received for 5\nI0504 11:45:17.256557 1807 log.go:172] (0xc000510b40) (5) Data frame handling\nI0504 11:45:17.256567 1807 log.go:172] (0xc000510b40) (5) Data frame sent\nI0504 11:45:17.256575 1807 log.go:172] (0xc000915b80) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0504 11:45:17.256600 1807 log.go:172] (0xc000915b80) Data frame received for 3\nI0504 11:45:17.256643 1807 log.go:172] (0xc000633720) (3) Data frame handling\nI0504 11:45:17.256660 1807 log.go:172] (0xc000633720) (3) Data frame sent\nI0504 11:45:17.256676 1807 log.go:172] (0xc000915b80) Data frame received for 3\nI0504 11:45:17.256694 1807 log.go:172] (0xc000633720) (3) Data frame handling\nI0504 11:45:17.256715 1807 log.go:172] (0xc000510b40) (5) Data frame handling\nI0504 11:45:17.258167 1807 log.go:172] (0xc000915b80) Data frame received for 1\nI0504 11:45:17.258182 1807 log.go:172] (0xc0008dc820) (1) Data frame handling\nI0504 11:45:17.258198 1807 log.go:172] (0xc0008dc820) (1) Data frame sent\nI0504 11:45:17.258213 1807 log.go:172] (0xc000915b80) (0xc0008dc820) Stream removed, broadcasting: 1\nI0504 11:45:17.258227 1807 log.go:172] (0xc000915b80) Go away received\nI0504 11:45:17.258517 1807 log.go:172] (0xc000915b80) (0xc0008dc820) Stream removed, broadcasting: 
1\nI0504 11:45:17.258536 1807 log.go:172] (0xc000915b80) (0xc000633720) Stream removed, broadcasting: 3\nI0504 11:45:17.258548 1807 log.go:172] (0xc000915b80) (0xc000510b40) Stream removed, broadcasting: 5\n" May 4 11:45:17.262: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 4 11:45:17.262: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 4 11:45:17.262: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 May 4 11:45:37.346: INFO: Deleting all statefulset in ns statefulset-5520 May 4 11:45:37.349: INFO: Scaling statefulset ss to 0 May 4 11:45:37.356: INFO: Waiting for statefulset status.replicas updated to 0 May 4 11:45:37.358: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:45:37.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5520" for this suite. • [SLOW TEST:85.384 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":120,"skipped":2091,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:45:37.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 May 4 11:45:37.517: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 4 11:45:37.570: INFO: Waiting for terminating namespaces to be deleted... 
May 4 11:45:37.572: INFO: Logging pods the kubelet thinks is on node kali-worker before test May 4 11:45:37.579: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:45:37.579: INFO: Container kube-proxy ready: true, restart count 0 May 4 11:45:37.579: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:45:37.579: INFO: Container kindnet-cni ready: true, restart count 1 May 4 11:45:37.579: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test May 4 11:45:37.600: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:45:37.600: INFO: Container kindnet-cni ready: true, restart count 0 May 4 11:45:37.600: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded) May 4 11:45:37.600: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-43b95b1f-d10a-4052-8ac3-b5b972ec34bd 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-43b95b1f-d10a-4052-8ac3-b5b972ec34bd off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-43b95b1f-d10a-4052-8ac3-b5b972ec34bd [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 May 4 11:45:45.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8990" for this suite. 
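
The NodeSelector flow above is: label the node the unconstrained pod landed on, then relaunch the pod with a nodeSelector that requires that label. A client-go sketch of both steps; the example.com/e2e-demo label, pod name and namespace are illustrative, not the random kubernetes.io/e2e-* label this run generated.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.TODO()

	// Apply the label to the chosen node with a strategic-merge patch.
	patch := []byte(`{"metadata":{"labels":{"example.com/e2e-demo":"42"}}}`)
	if _, err := cs.CoreV1().Nodes().Patch(ctx, "kali-worker", types.StrategicMergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// Relaunch the pod, now requiring that label via nodeSelector.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "with-labels-"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"example.com/e2e-demo": "42"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("pod created; the scheduler can only place it on kali-worker")
}
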
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:8.352 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":121,"skipped":2097,"failed":0} SSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client May 4 11:45:45.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 May 4 11:45:45.858: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/:
[kubelet /logs/ directory listing, repeated for each of the 20 proxied requests: alternatives.log, containers/ (the per-request INFO prefixes for requests (1)-(19) are missing from this capture)]
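
Each of those listings is the response to a GET against the node proxy subresource with the kubelet port written into the node name, as shown in the request path logged above. Roughly the same request through client-go, as a sketch (only the kubeconfig path and node name are taken from this run):

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Equivalent to GET /api/v1/nodes/kali-worker:10250/proxy/logs/
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("kali-worker:10250"). // node name with the kubelet port spelled out
		SubResource("proxy").
		Suffix("logs/").
		Do(context.TODO()).
		Raw()
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s\n", body) // the directory listing: alternatives.log, containers/, ...
}
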
>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 11:45:46.346: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 11:45:48.376: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189546, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189546, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189546, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189546, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 11:45:51.415: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:45:51.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-951" for this suite.
STEP: Destroying namespace "webhook-951-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.001 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":123,"skipped":2123,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:45:51.927: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
May  4 11:45:52.252: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config cluster-info'
May  4 11:45:52.427: INFO: stderr: ""
May  4 11:45:52.427: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:45:52.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6931" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":124,"skipped":2130,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:45:52.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-2e7535f0-009b-4834-bf87-b141f57b485a
STEP: Creating a pod to test consume configMaps
May  4 11:45:52.557: INFO: Waiting up to 5m0s for pod "pod-configmaps-87c8afad-101c-47bc-b11d-f40e746a9f8d" in namespace "configmap-4367" to be "Succeeded or Failed"
May  4 11:45:52.731: INFO: Pod "pod-configmaps-87c8afad-101c-47bc-b11d-f40e746a9f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 173.637294ms
May  4 11:45:54.735: INFO: Pod "pod-configmaps-87c8afad-101c-47bc-b11d-f40e746a9f8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.177966106s
May  4 11:45:56.740: INFO: Pod "pod-configmaps-87c8afad-101c-47bc-b11d-f40e746a9f8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.182170797s
STEP: Saw pod success
May  4 11:45:56.740: INFO: Pod "pod-configmaps-87c8afad-101c-47bc-b11d-f40e746a9f8d" satisfied condition "Succeeded or Failed"
May  4 11:45:56.743: INFO: Trying to get logs from node kali-worker pod pod-configmaps-87c8afad-101c-47bc-b11d-f40e746a9f8d container configmap-volume-test: 
STEP: delete the pod
May  4 11:45:56.782: INFO: Waiting for pod pod-configmaps-87c8afad-101c-47bc-b11d-f40e746a9f8d to disappear
May  4 11:45:56.813: INFO: Pod pod-configmaps-87c8afad-101c-47bc-b11d-f40e746a9f8d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:45:56.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4367" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2136,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:45:56.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
May  4 11:45:56.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
May  4 11:46:07.619: INFO: >>> kubeConfig: /root/.kube/config
May  4 11:46:10.555: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:46:21.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7643" for this suite.

• [SLOW TEST:24.453 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":126,"skipped":2209,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:46:21.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  4 11:46:21.385: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0f78795-6185-481c-8561-e8d8a9ebb57a" in namespace "downward-api-2133" to be "Succeeded or Failed"
May  4 11:46:21.413: INFO: Pod "downwardapi-volume-e0f78795-6185-481c-8561-e8d8a9ebb57a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.684858ms
May  4 11:46:23.423: INFO: Pod "downwardapi-volume-e0f78795-6185-481c-8561-e8d8a9ebb57a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037721601s
May  4 11:46:25.556: INFO: Pod "downwardapi-volume-e0f78795-6185-481c-8561-e8d8a9ebb57a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.17087286s
STEP: Saw pod success
May  4 11:46:25.556: INFO: Pod "downwardapi-volume-e0f78795-6185-481c-8561-e8d8a9ebb57a" satisfied condition "Succeeded or Failed"
May  4 11:46:25.559: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e0f78795-6185-481c-8561-e8d8a9ebb57a container client-container: 
STEP: delete the pod
May  4 11:46:25.803: INFO: Waiting for pod downwardapi-volume-e0f78795-6185-481c-8561-e8d8a9ebb57a to disappear
May  4 11:46:25.807: INFO: Pod downwardapi-volume-e0f78795-6185-481c-8561-e8d8a9ebb57a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:46:25.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2133" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2210,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:46:25.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May  4 11:46:33.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  4 11:46:33.982: INFO: Pod pod-with-poststart-http-hook still exists
May  4 11:46:35.982: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  4 11:46:36.023: INFO: Pod pod-with-poststart-http-hook still exists
May  4 11:46:37.982: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
May  4 11:46:37.987: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:46:37.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3153" for this suite.

• [SLOW TEST:12.182 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2222,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:46:37.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:46:42.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9623" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":129,"skipped":2234,"failed":0}

------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:46:42.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  4 11:46:42.295: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6e0e015c-c311-4b8b-987c-a4ebe080eed7" in namespace "downward-api-5181" to be "Succeeded or Failed"
May  4 11:46:42.520: INFO: Pod "downwardapi-volume-6e0e015c-c311-4b8b-987c-a4ebe080eed7": Phase="Pending", Reason="", readiness=false. Elapsed: 225.284263ms
May  4 11:46:44.525: INFO: Pod "downwardapi-volume-6e0e015c-c311-4b8b-987c-a4ebe080eed7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.229853088s
May  4 11:46:46.529: INFO: Pod "downwardapi-volume-6e0e015c-c311-4b8b-987c-a4ebe080eed7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.234222893s
STEP: Saw pod success
May  4 11:46:46.529: INFO: Pod "downwardapi-volume-6e0e015c-c311-4b8b-987c-a4ebe080eed7" satisfied condition "Succeeded or Failed"
May  4 11:46:46.532: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-6e0e015c-c311-4b8b-987c-a4ebe080eed7 container client-container: 
STEP: delete the pod
May  4 11:46:46.664: INFO: Waiting for pod downwardapi-volume-6e0e015c-c311-4b8b-987c-a4ebe080eed7 to disappear
May  4 11:46:46.725: INFO: Pod downwardapi-volume-6e0e015c-c311-4b8b-987c-a4ebe080eed7 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:46:46.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5181" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2234,"failed":0}
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:46:46.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-2007
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  4 11:46:46.816: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  4 11:46:46.943: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  4 11:46:49.012: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  4 11:46:50.964: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 11:46:52.947: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 11:46:54.946: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 11:46:56.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 11:46:58.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 11:47:00.948: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 11:47:02.947: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 11:47:04.947: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 11:47:07.624: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 11:47:08.948: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  4 11:47:08.955: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  4 11:47:15.009: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.175 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2007 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:47:15.009: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:47:15.042244       7 log.go:172] (0xc000dd84d0) (0xc002564dc0) Create stream
I0504 11:47:15.042273       7 log.go:172] (0xc000dd84d0) (0xc002564dc0) Stream added, broadcasting: 1
I0504 11:47:15.043778       7 log.go:172] (0xc000dd84d0) Reply frame received for 1
I0504 11:47:15.043824       7 log.go:172] (0xc000dd84d0) (0xc0025650e0) Create stream
I0504 11:47:15.043839       7 log.go:172] (0xc000dd84d0) (0xc0025650e0) Stream added, broadcasting: 3
I0504 11:47:15.044837       7 log.go:172] (0xc000dd84d0) Reply frame received for 3
I0504 11:47:15.044870       7 log.go:172] (0xc000dd84d0) (0xc001b04f00) Create stream
I0504 11:47:15.044881       7 log.go:172] (0xc000dd84d0) (0xc001b04f00) Stream added, broadcasting: 5
I0504 11:47:15.046022       7 log.go:172] (0xc000dd84d0) Reply frame received for 5
I0504 11:47:16.105103       7 log.go:172] (0xc000dd84d0) Data frame received for 5
I0504 11:47:16.105383       7 log.go:172] (0xc001b04f00) (5) Data frame handling
I0504 11:47:16.105435       7 log.go:172] (0xc000dd84d0) Data frame received for 3
I0504 11:47:16.105454       7 log.go:172] (0xc0025650e0) (3) Data frame handling
I0504 11:47:16.105483       7 log.go:172] (0xc0025650e0) (3) Data frame sent
I0504 11:47:16.105498       7 log.go:172] (0xc000dd84d0) Data frame received for 3
I0504 11:47:16.105524       7 log.go:172] (0xc0025650e0) (3) Data frame handling
I0504 11:47:16.107462       7 log.go:172] (0xc000dd84d0) Data frame received for 1
I0504 11:47:16.107489       7 log.go:172] (0xc002564dc0) (1) Data frame handling
I0504 11:47:16.107512       7 log.go:172] (0xc002564dc0) (1) Data frame sent
I0504 11:47:16.107532       7 log.go:172] (0xc000dd84d0) (0xc002564dc0) Stream removed, broadcasting: 1
I0504 11:47:16.107552       7 log.go:172] (0xc000dd84d0) Go away received
I0504 11:47:16.107772       7 log.go:172] (0xc000dd84d0) (0xc002564dc0) Stream removed, broadcasting: 1
I0504 11:47:16.107797       7 log.go:172] (0xc000dd84d0) (0xc0025650e0) Stream removed, broadcasting: 3
I0504 11:47:16.107813       7 log.go:172] (0xc000dd84d0) (0xc001b04f00) Stream removed, broadcasting: 5
May  4 11:47:16.107: INFO: Found all expected endpoints: [netserver-0]
May  4 11:47:16.111: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.106 8081 | grep -v '^\s*$'] Namespace:pod-network-test-2007 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:47:16.111: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:47:16.138085       7 log.go:172] (0xc00265e370) (0xc001ada640) Create stream
I0504 11:47:16.138113       7 log.go:172] (0xc00265e370) (0xc001ada640) Stream added, broadcasting: 1
I0504 11:47:16.139873       7 log.go:172] (0xc00265e370) Reply frame received for 1
I0504 11:47:16.139904       7 log.go:172] (0xc00265e370) (0xc002565180) Create stream
I0504 11:47:16.139921       7 log.go:172] (0xc00265e370) (0xc002565180) Stream added, broadcasting: 3
I0504 11:47:16.140924       7 log.go:172] (0xc00265e370) Reply frame received for 3
I0504 11:47:16.140980       7 log.go:172] (0xc00265e370) (0xc001ada6e0) Create stream
I0504 11:47:16.140997       7 log.go:172] (0xc00265e370) (0xc001ada6e0) Stream added, broadcasting: 5
I0504 11:47:16.142456       7 log.go:172] (0xc00265e370) Reply frame received for 5
I0504 11:47:17.222583       7 log.go:172] (0xc00265e370) Data frame received for 3
I0504 11:47:17.222626       7 log.go:172] (0xc002565180) (3) Data frame handling
I0504 11:47:17.222639       7 log.go:172] (0xc002565180) (3) Data frame sent
I0504 11:47:17.222655       7 log.go:172] (0xc00265e370) Data frame received for 3
I0504 11:47:17.222664       7 log.go:172] (0xc002565180) (3) Data frame handling
I0504 11:47:17.222691       7 log.go:172] (0xc00265e370) Data frame received for 5
I0504 11:47:17.222703       7 log.go:172] (0xc001ada6e0) (5) Data frame handling
I0504 11:47:17.223904       7 log.go:172] (0xc00265e370) Data frame received for 1
I0504 11:47:17.223943       7 log.go:172] (0xc001ada640) (1) Data frame handling
I0504 11:47:17.223960       7 log.go:172] (0xc001ada640) (1) Data frame sent
I0504 11:47:17.223978       7 log.go:172] (0xc00265e370) (0xc001ada640) Stream removed, broadcasting: 1
I0504 11:47:17.224020       7 log.go:172] (0xc00265e370) Go away received
I0504 11:47:17.224199       7 log.go:172] (0xc00265e370) (0xc001ada640) Stream removed, broadcasting: 1
I0504 11:47:17.224242       7 log.go:172] (0xc00265e370) (0xc002565180) Stream removed, broadcasting: 3
I0504 11:47:17.224261       7 log.go:172] (0xc00265e370) (0xc001ada6e0) Stream removed, broadcasting: 5
May  4 11:47:17.224: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:47:17.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2007" for this suite.

• [SLOW TEST:30.497 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2235,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:47:17.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-3136
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-3136
STEP: creating replication controller externalsvc in namespace services-3136
I0504 11:47:17.457341       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3136, replica count: 2
I0504 11:47:20.507719       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0504 11:47:23.507996       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
May  4 11:47:23.874: INFO: Creating new exec pod
May  4 11:47:28.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3136 execpodhc9nr -- /bin/sh -x -c nslookup nodeport-service'
May  4 11:47:28.282: INFO: stderr: "I0504 11:47:28.198274    1847 log.go:172] (0xc0005a8000) (0xc000a08000) Create stream\nI0504 11:47:28.198347    1847 log.go:172] (0xc0005a8000) (0xc000a08000) Stream added, broadcasting: 1\nI0504 11:47:28.201299    1847 log.go:172] (0xc0005a8000) Reply frame received for 1\nI0504 11:47:28.201349    1847 log.go:172] (0xc0005a8000) (0xc0006eb2c0) Create stream\nI0504 11:47:28.201368    1847 log.go:172] (0xc0005a8000) (0xc0006eb2c0) Stream added, broadcasting: 3\nI0504 11:47:28.202467    1847 log.go:172] (0xc0005a8000) Reply frame received for 3\nI0504 11:47:28.202505    1847 log.go:172] (0xc0005a8000) (0xc000a080a0) Create stream\nI0504 11:47:28.202519    1847 log.go:172] (0xc0005a8000) (0xc000a080a0) Stream added, broadcasting: 5\nI0504 11:47:28.203424    1847 log.go:172] (0xc0005a8000) Reply frame received for 5\nI0504 11:47:28.265669    1847 log.go:172] (0xc0005a8000) Data frame received for 5\nI0504 11:47:28.265709    1847 log.go:172] (0xc000a080a0) (5) Data frame handling\nI0504 11:47:28.265733    1847 log.go:172] (0xc000a080a0) (5) Data frame sent\n+ nslookup nodeport-service\nI0504 11:47:28.273363    1847 log.go:172] (0xc0005a8000) Data frame received for 3\nI0504 11:47:28.273384    1847 log.go:172] (0xc0006eb2c0) (3) Data frame handling\nI0504 11:47:28.273411    1847 log.go:172] (0xc0006eb2c0) (3) Data frame sent\nI0504 11:47:28.274778    1847 log.go:172] (0xc0005a8000) Data frame received for 3\nI0504 11:47:28.274801    1847 log.go:172] (0xc0006eb2c0) (3) Data frame handling\nI0504 11:47:28.274824    1847 log.go:172] (0xc0006eb2c0) (3) Data frame sent\nI0504 11:47:28.275232    1847 log.go:172] (0xc0005a8000) Data frame received for 5\nI0504 11:47:28.275245    1847 log.go:172] (0xc000a080a0) (5) Data frame handling\nI0504 11:47:28.275270    1847 log.go:172] (0xc0005a8000) Data frame received for 3\nI0504 11:47:28.275306    1847 log.go:172] (0xc0006eb2c0) (3) Data frame handling\nI0504 11:47:28.277787    1847 log.go:172] (0xc0005a8000) Data frame received for 1\nI0504 11:47:28.277806    1847 log.go:172] (0xc000a08000) (1) Data frame handling\nI0504 11:47:28.277814    1847 log.go:172] (0xc000a08000) (1) Data frame sent\nI0504 11:47:28.277825    1847 log.go:172] (0xc0005a8000) (0xc000a08000) Stream removed, broadcasting: 1\nI0504 11:47:28.277959    1847 log.go:172] (0xc0005a8000) Go away received\nI0504 11:47:28.278108    1847 log.go:172] (0xc0005a8000) (0xc000a08000) Stream removed, broadcasting: 1\nI0504 11:47:28.278133    1847 log.go:172] (0xc0005a8000) (0xc0006eb2c0) Stream removed, broadcasting: 3\nI0504 11:47:28.278151    1847 log.go:172] (0xc0005a8000) (0xc000a080a0) Stream removed, broadcasting: 5\n"
May  4 11:47:28.282: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-3136.svc.cluster.local\tcanonical name = externalsvc.services-3136.svc.cluster.local.\nName:\texternalsvc.services-3136.svc.cluster.local\nAddress: 10.100.163.176\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3136, will wait for the garbage collector to delete the pods
May  4 11:47:28.343: INFO: Deleting ReplicationController externalsvc took: 7.284297ms
May  4 11:47:28.643: INFO: Terminating ReplicationController externalsvc pods took: 300.256832ms
May  4 11:47:43.847: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:47:43.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3136" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:26.698 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":132,"skipped":2261,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:47:43.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
May  4 11:47:44.038: INFO: Waiting up to 5m0s for pod "var-expansion-3741eb33-691d-4221-a66f-fab7e1558ef0" in namespace "var-expansion-7057" to be "Succeeded or Failed"
May  4 11:47:44.083: INFO: Pod "var-expansion-3741eb33-691d-4221-a66f-fab7e1558ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 45.540202ms
May  4 11:47:46.120: INFO: Pod "var-expansion-3741eb33-691d-4221-a66f-fab7e1558ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082224179s
May  4 11:47:48.124: INFO: Pod "var-expansion-3741eb33-691d-4221-a66f-fab7e1558ef0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086495566s
STEP: Saw pod success
May  4 11:47:48.124: INFO: Pod "var-expansion-3741eb33-691d-4221-a66f-fab7e1558ef0" satisfied condition "Succeeded or Failed"
May  4 11:47:48.127: INFO: Trying to get logs from node kali-worker pod var-expansion-3741eb33-691d-4221-a66f-fab7e1558ef0 container dapi-container: 
STEP: delete the pod
May  4 11:47:48.174: INFO: Waiting for pod var-expansion-3741eb33-691d-4221-a66f-fab7e1558ef0 to disappear
May  4 11:47:48.187: INFO: Pod var-expansion-3741eb33-691d-4221-a66f-fab7e1558ef0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:47:48.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7057" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":133,"skipped":2278,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:47:48.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-5l9tg in namespace proxy-358
I0504 11:47:48.333771       7 runners.go:190] Created replication controller with name: proxy-service-5l9tg, namespace: proxy-358, replica count: 1
I0504 11:47:49.384331       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0504 11:47:50.384673       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0504 11:47:51.384865       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0504 11:47:52.385359       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0504 11:47:53.385607       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0504 11:47:54.385809       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0504 11:47:55.386110       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0504 11:47:56.386327       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0504 11:47:57.386545       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0504 11:47:58.386785       7 runners.go:190] proxy-service-5l9tg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  4 11:47:58.391: INFO: setup took 10.111439458s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May  4 11:47:58.400: INFO: (0) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59/proxy/: test (200; 9.488006ms)
May  4 11:47:58.402: INFO: (0) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname1/proxy/: foo (200; 11.188751ms)
May  4 11:47:58.402: INFO: (0) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 10.910447ms)
May  4 11:47:58.402: INFO: (0) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 11.30296ms)
May  4 11:47:58.402: INFO: (0) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testt... (200; 10.981238ms)
May  4 11:47:58.404: INFO: (0) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname1/proxy/: foo (200; 12.770661ms)
May  4 11:47:58.404: INFO: (0) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 13.26558ms)
May  4 11:47:58.405: INFO: (0) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 13.429678ms)
May  4 11:47:58.405: INFO: (0) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname2/proxy/: bar (200; 13.953796ms)
May  4 11:47:58.405: INFO: (0) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 14.043744ms)
May  4 11:47:58.405: INFO: (0) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 13.922832ms)
May  4 11:47:58.405: INFO: (0) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 13.896322ms)
May  4 11:47:58.406: INFO: (0) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname2/proxy/: tls qux (200; 13.129046ms)
May  4 11:47:58.406: INFO: (0) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 15.214085ms)
May  4 11:47:58.409: INFO: (0) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: t... (200; 5.127818ms)
May  4 11:47:58.415: INFO: (1) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testtest (200; 5.724528ms)
May  4 11:47:58.415: INFO: (1) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 5.877659ms)
May  4 11:47:58.415: INFO: (1) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 5.878387ms)
May  4 11:47:58.420: INFO: (2) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 4.066252ms)
May  4 11:47:58.420: INFO: (2) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 4.122804ms)
May  4 11:47:58.420: INFO: (2) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 4.256209ms)
May  4 11:47:58.420: INFO: (2) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:1080/proxy/: t... (200; 4.309984ms)
May  4 11:47:58.420: INFO: (2) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.462498ms)
May  4 11:47:58.420: INFO: (2) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testtest (200; 4.593642ms)
May  4 11:47:58.420: INFO: (2) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 4.661805ms)
May  4 11:47:58.420: INFO: (2) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.651428ms)
May  4 11:47:58.421: INFO: (2) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: test (200; 5.185195ms)
May  4 11:47:58.428: INFO: (3) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 5.214639ms)
May  4 11:47:58.429: INFO: (3) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 5.632815ms)
May  4 11:47:58.429: INFO: (3) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testt... (200; 6.353626ms)
May  4 11:47:58.430: INFO: (3) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 6.408862ms)
May  4 11:47:58.433: INFO: (4) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:1080/proxy/: t... (200; 3.319407ms)
May  4 11:47:58.433: INFO: (4) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 3.513682ms)
May  4 11:47:58.433: INFO: (4) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testtest (200; 7.494277ms)
May  4 11:47:58.438: INFO: (4) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname2/proxy/: tls qux (200; 7.602668ms)
May  4 11:47:58.438: INFO: (4) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 7.054838ms)
May  4 11:47:58.438: INFO: (4) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname1/proxy/: foo (200; 7.492753ms)
May  4 11:47:58.438: INFO: (4) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 7.092729ms)
May  4 11:47:58.438: INFO: (4) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: test (200; 13.228487ms)
May  4 11:47:58.452: INFO: (5) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:1080/proxy/: t... (200; 13.323259ms)
May  4 11:47:58.452: INFO: (5) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname1/proxy/: foo (200; 13.366619ms)
May  4 11:47:58.452: INFO: (5) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 13.308725ms)
May  4 11:47:58.452: INFO: (5) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 13.463708ms)
May  4 11:47:58.452: INFO: (5) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 13.663465ms)
May  4 11:47:58.452: INFO: (5) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testtest (200; 5.601653ms)
May  4 11:47:58.484: INFO: (6) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 5.550759ms)
May  4 11:47:58.488: INFO: (6) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 8.797231ms)
May  4 11:47:58.488: INFO: (6) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:1080/proxy/: t... (200; 8.730037ms)
May  4 11:47:58.488: INFO: (6) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 8.787989ms)
May  4 11:47:58.488: INFO: (6) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testtest (200; 5.265293ms)
May  4 11:47:58.494: INFO: (7) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testt... (200; 5.473712ms)
May  4 11:47:58.498: INFO: (8) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 4.249604ms)
May  4 11:47:58.498: INFO: (8) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.2702ms)
May  4 11:47:58.498: INFO: (8) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 4.312289ms)
May  4 11:47:58.498: INFO: (8) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:1080/proxy/: t... (200; 4.377218ms)
May  4 11:47:58.499: INFO: (8) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59/proxy/: test (200; 4.626131ms)
May  4 11:47:58.499: INFO: (8) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 4.63391ms)
May  4 11:47:58.499: INFO: (8) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 4.721386ms)
May  4 11:47:58.499: INFO: (8) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname1/proxy/: foo (200; 4.645343ms)
May  4 11:47:58.499: INFO: (8) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.654479ms)
May  4 11:47:58.499: INFO: (8) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 4.790398ms)
May  4 11:47:58.499: INFO: (8) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname2/proxy/: bar (200; 4.745027ms)
May  4 11:47:58.499: INFO: (8) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 4.767298ms)
May  4 11:47:58.499: INFO: (8) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testtesttest (200; 3.826839ms)
May  4 11:47:58.504: INFO: (9) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 4.292717ms)
May  4 11:47:58.504: INFO: (9) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname1/proxy/: foo (200; 4.355447ms)
May  4 11:47:58.504: INFO: (9) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname2/proxy/: tls qux (200; 4.363133ms)
May  4 11:47:58.504: INFO: (9) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 4.479336ms)
May  4 11:47:58.504: INFO: (9) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:1080/proxy/: t... (200; 4.435749ms)
May  4 11:47:58.504: INFO: (9) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 4.360349ms)
May  4 11:47:58.504: INFO: (9) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 4.487476ms)
May  4 11:47:58.504: INFO: (9) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.411249ms)
May  4 11:47:58.504: INFO: (9) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: testt... (200; 5.789281ms)
May  4 11:47:58.510: INFO: (10) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 5.832219ms)
May  4 11:47:58.510: INFO: (10) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 5.905072ms)
May  4 11:47:58.510: INFO: (10) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59/proxy/: test (200; 5.938981ms)
May  4 11:47:58.510: INFO: (10) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 5.967165ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59/proxy/: test (200; 3.889739ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testt... (200; 3.938511ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.019162ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 3.994992ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 4.010286ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 4.034777ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.130916ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 4.461048ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 4.49357ms)
May  4 11:47:58.514: INFO: (11) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname1/proxy/: foo (200; 4.559856ms)
May  4 11:47:58.515: INFO: (11) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname2/proxy/: bar (200; 4.569291ms)
May  4 11:47:58.515: INFO: (11) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 4.593494ms)
May  4 11:47:58.515: INFO: (11) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname2/proxy/: tls qux (200; 4.613535ms)
May  4 11:47:58.515: INFO: (11) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname1/proxy/: foo (200; 4.68899ms)
May  4 11:47:58.518: INFO: (12) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 3.014052ms)
May  4 11:47:58.518: INFO: (12) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59/proxy/: test (200; 3.193174ms)
May  4 11:47:58.518: INFO: (12) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 3.674433ms)
May  4 11:47:58.518: INFO: (12) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname2/proxy/: bar (200; 3.709535ms)
May  4 11:47:58.519: INFO: (12) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.038261ms)
May  4 11:47:58.519: INFO: (12) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testt... (200; 4.443178ms)
May  4 11:47:58.520: INFO: (12) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 5.025339ms)
May  4 11:47:58.520: INFO: (12) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 5.036983ms)
May  4 11:47:58.520: INFO: (12) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname1/proxy/: foo (200; 5.070193ms)
May  4 11:47:58.520: INFO: (12) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: test (200; 2.944848ms)
May  4 11:47:58.523: INFO: (13) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 2.96624ms)
May  4 11:47:58.525: INFO: (13) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.582463ms)
May  4 11:47:58.525: INFO: (13) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 4.618582ms)
May  4 11:47:58.525: INFO: (13) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testt... (200; 4.850609ms)
May  4 11:47:58.525: INFO: (13) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 4.853054ms)
May  4 11:47:58.526: INFO: (13) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 5.811027ms)
May  4 11:47:58.526: INFO: (13) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 5.920125ms)
May  4 11:47:58.526: INFO: (13) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname2/proxy/: bar (200; 6.034189ms)
May  4 11:47:58.526: INFO: (13) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname1/proxy/: foo (200; 6.037683ms)
May  4 11:47:58.526: INFO: (13) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname1/proxy/: foo (200; 6.128317ms)
May  4 11:47:58.526: INFO: (13) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname2/proxy/: tls qux (200; 6.073778ms)
May  4 11:47:58.530: INFO: (14) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testtest (200; 5.878527ms)
May  4 11:47:58.532: INFO: (14) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 5.920001ms)
May  4 11:47:58.532: INFO: (14) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: t... (200; 6.039386ms)
May  4 11:47:58.532: INFO: (14) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 6.02475ms)
May  4 11:47:58.532: INFO: (14) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname2/proxy/: tls qux (200; 6.120636ms)
May  4 11:47:58.532: INFO: (14) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 6.067508ms)
May  4 11:47:58.532: INFO: (14) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname1/proxy/: foo (200; 6.120497ms)
May  4 11:47:58.532: INFO: (14) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 6.109022ms)
May  4 11:47:58.535: INFO: (15) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 2.60651ms)
May  4 11:47:58.535: INFO: (15) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 2.872155ms)
May  4 11:47:58.535: INFO: (15) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 2.780902ms)
May  4 11:47:58.535: INFO: (15) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testt... (200; 4.594621ms)
May  4 11:47:58.538: INFO: (15) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59/proxy/: test (200; 5.01952ms)
May  4 11:47:58.538: INFO: (15) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname2/proxy/: bar (200; 5.153038ms)
May  4 11:47:58.538: INFO: (15) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname2/proxy/: tls qux (200; 5.350508ms)
May  4 11:47:58.538: INFO: (15) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: testt... (200; 4.858474ms)
May  4 11:47:58.543: INFO: (16) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.919827ms)
May  4 11:47:58.543: INFO: (16) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59/proxy/: test (200; 4.988705ms)
May  4 11:47:58.543: INFO: (16) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 5.151978ms)
May  4 11:47:58.543: INFO: (16) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 5.183271ms)
May  4 11:47:58.543: INFO: (16) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 5.114471ms)
May  4 11:47:58.543: INFO: (16) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 5.209426ms)
May  4 11:47:58.543: INFO: (16) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 5.204261ms)
May  4 11:47:58.544: INFO: (16) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: t... (200; 3.780068ms)
May  4 11:47:58.548: INFO: (17) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname1/proxy/: foo (200; 3.819293ms)
May  4 11:47:58.548: INFO: (17) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 3.933485ms)
May  4 11:47:58.548: INFO: (17) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname1/proxy/: foo (200; 3.919853ms)
May  4 11:47:58.548: INFO: (17) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname2/proxy/: tls qux (200; 3.865429ms)
May  4 11:47:58.549: INFO: (17) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 4.703688ms)
May  4 11:47:58.549: INFO: (17) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59/proxy/: test (200; 4.713105ms)
May  4 11:47:58.549: INFO: (17) /api/v1/namespaces/proxy-358/services/http:proxy-service-5l9tg:portname2/proxy/: bar (200; 4.820797ms)
May  4 11:47:58.549: INFO: (17) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 4.789334ms)
May  4 11:47:58.549: INFO: (17) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 5.043855ms)
May  4 11:47:58.549: INFO: (17) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:462/proxy/: tls qux (200; 4.839983ms)
May  4 11:47:58.549: INFO: (17) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 4.846801ms)
May  4 11:47:58.549: INFO: (17) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testtest (200; 3.021341ms)
May  4 11:47:58.552: INFO: (18) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:460/proxy/: tls baz (200; 3.039485ms)
May  4 11:47:58.553: INFO: (18) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 3.5504ms)
May  4 11:47:58.553: INFO: (18) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 3.658836ms)
May  4 11:47:58.553: INFO: (18) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:1080/proxy/: testt... (200; 3.959886ms)
May  4 11:47:58.553: INFO: (18) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 3.967561ms)
May  4 11:47:58.553: INFO: (18) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: test (200; 2.661424ms)
May  4 11:47:58.558: INFO: (19) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 3.700316ms)
May  4 11:47:58.558: INFO: (19) /api/v1/namespaces/proxy-358/pods/http:proxy-service-5l9tg-lzw59:160/proxy/: foo (200; 3.675736ms)
May  4 11:47:58.558: INFO: (19) /api/v1/namespaces/proxy-358/pods/https:proxy-service-5l9tg-lzw59:443/proxy/: testt... (200; 5.445426ms)
May  4 11:47:58.559: INFO: (19) /api/v1/namespaces/proxy-358/pods/proxy-service-5l9tg-lzw59:162/proxy/: bar (200; 5.473113ms)
May  4 11:47:58.560: INFO: (19) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname1/proxy/: tls baz (200; 5.526592ms)
May  4 11:47:58.560: INFO: (19) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname2/proxy/: bar (200; 5.68531ms)
May  4 11:47:58.560: INFO: (19) /api/v1/namespaces/proxy-358/services/https:proxy-service-5l9tg:tlsportname2/proxy/: tls qux (200; 5.884758ms)
May  4 11:47:58.560: INFO: (19) /api/v1/namespaces/proxy-358/services/proxy-service-5l9tg:portname1/proxy/: foo (200; 6.181412ms)
STEP: deleting ReplicationController proxy-service-5l9tg in namespace proxy-358, will wait for the garbage collector to delete the pods
May  4 11:47:58.619: INFO: Deleting ReplicationController proxy-service-5l9tg took: 6.603492ms
May  4 11:47:58.719: INFO: Terminating ReplicationController proxy-service-5l9tg pods took: 100.20108ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:03.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-358" for this suite.

• [SLOW TEST:15.656 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":134,"skipped":2335,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:48:03.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5939.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5939.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5939.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-5939.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-5939.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5939.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
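Unrolled for readability, each of the two probe scripts above boils down to the loop below; the doubled $$ in the log is the e2e framework escaping $ inside the pod command, so a plain shell would use a single $. This is a restatement for clarity, not an additional command run by the test:

for i in $(seq 1 600); do
  # UDP and TCP lookups of the pod's subdomain name and of the headless service name
  check="$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local
  check="$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local
  check="$(dig +notcp +noall +answer +search dns-test-service-2.dns-5939.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local
  check="$(dig +tcp +noall +answer +search dns-test-service-2.dns-5939.svc.cluster.local A)" \
    && test -n "$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local
  # A record derived from the pod's own IP, e.g. 10-244-1-7.dns-5939.pod.cluster.local
  podARec=$(hostname -i | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-5939.pod.cluster.local"}')
  check="$(dig +notcp +noall +answer +search ${podARec} A)" && test -n "$check" && echo OK > /results/wheezy_udp@PodARecord
  check="$(dig +tcp +noall +answer +search ${podARec} A)" && test -n "$check" && echo OK > /results/wheezy_tcp@PodARecord
  sleep 1
done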

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  4 11:48:10.198: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:10.201: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:10.204: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:10.206: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:10.291: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:10.294: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:10.297: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:10.300: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:10.306: INFO: Lookups using dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local]

May  4 11:48:15.311: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:15.315: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:15.319: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:15.323: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:15.334: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:15.337: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:15.340: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:15.343: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:15.350: INFO: Lookups using dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local]

May  4 11:48:20.310: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:20.313: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:20.316: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:20.319: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:20.328: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:20.332: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:20.339: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:20.342: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:20.347: INFO: Lookups using dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local]

May  4 11:48:25.311: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:25.315: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:25.318: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:25.322: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:25.332: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:25.334: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:25.338: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:25.340: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:25.347: INFO: Lookups using dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local]

May  4 11:48:30.311: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:30.315: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:30.319: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:30.322: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:30.331: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:30.334: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:30.338: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:30.340: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:30.346: INFO: Lookups using dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local]

May  4 11:48:35.311: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:35.315: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:35.319: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:35.322: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:35.332: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:35.336: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:35.339: INFO: Unable to read jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:35.343: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local from pod dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00: the server could not find the requested resource (get pods dns-test-214ea05f-f783-4f40-920a-253417d69f00)
May  4 11:48:35.350: INFO: Lookups using dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local wheezy_udp@dns-test-service-2.dns-5939.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-5939.svc.cluster.local jessie_udp@dns-test-service-2.dns-5939.svc.cluster.local jessie_tcp@dns-test-service-2.dns-5939.svc.cluster.local]

May  4 11:48:40.348: INFO: DNS probes using dns-5939/dns-test-214ea05f-f783-4f40-920a-253417d69f00 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:41.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5939" for this suite.

• [SLOW TEST:37.172 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":135,"skipped":2347,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:48:41.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-c197e56a-743d-4853-8125-138fc5e16271
STEP: Creating a pod to test consume secrets
May  4 11:48:41.171: INFO: Waiting up to 5m0s for pod "pod-secrets-f877f18a-87d5-44a9-89c2-55c2ca6fa923" in namespace "secrets-9948" to be "Succeeded or Failed"
May  4 11:48:41.176: INFO: Pod "pod-secrets-f877f18a-87d5-44a9-89c2-55c2ca6fa923": Phase="Pending", Reason="", readiness=false. Elapsed: 5.424773ms
May  4 11:48:43.183: INFO: Pod "pod-secrets-f877f18a-87d5-44a9-89c2-55c2ca6fa923": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012537854s
May  4 11:48:45.210: INFO: Pod "pod-secrets-f877f18a-87d5-44a9-89c2-55c2ca6fa923": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038731295s
STEP: Saw pod success
May  4 11:48:45.210: INFO: Pod "pod-secrets-f877f18a-87d5-44a9-89c2-55c2ca6fa923" satisfied condition "Succeeded or Failed"
May  4 11:48:45.213: INFO: Trying to get logs from node kali-worker pod pod-secrets-f877f18a-87d5-44a9-89c2-55c2ca6fa923 container secret-volume-test: 
STEP: delete the pod
May  4 11:48:45.301: INFO: Waiting for pod pod-secrets-f877f18a-87d5-44a9-89c2-55c2ca6fa923 to disappear
May  4 11:48:45.304: INFO: Pod pod-secrets-f877f18a-87d5-44a9-89c2-55c2ca6fa923 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:45.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9948" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2361,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:48:45.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:49.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1964" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2373,"failed":0}
SSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:48:49.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:49.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8349" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":138,"skipped":2376,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:48:49.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-94d84eda-df6f-4f96-af36-3456e2ac0312
STEP: Creating secret with name secret-projected-all-test-volume-3f3e8805-5e9b-4c15-9ad2-580a9a1235b7
STEP: Creating a pod to test Check all projections for projected volume plugin
May  4 11:48:49.658: INFO: Waiting up to 5m0s for pod "projected-volume-f2f23926-ed83-4b5c-99b3-21bede2707fd" in namespace "projected-6672" to be "Succeeded or Failed"
May  4 11:48:49.668: INFO: Pod "projected-volume-f2f23926-ed83-4b5c-99b3-21bede2707fd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.01468ms
May  4 11:48:51.671: INFO: Pod "projected-volume-f2f23926-ed83-4b5c-99b3-21bede2707fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012844109s
May  4 11:48:53.676: INFO: Pod "projected-volume-f2f23926-ed83-4b5c-99b3-21bede2707fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017322869s
STEP: Saw pod success
May  4 11:48:53.676: INFO: Pod "projected-volume-f2f23926-ed83-4b5c-99b3-21bede2707fd" satisfied condition "Succeeded or Failed"
May  4 11:48:53.679: INFO: Trying to get logs from node kali-worker2 pod projected-volume-f2f23926-ed83-4b5c-99b3-21bede2707fd container projected-all-volume-test: 
STEP: delete the pod
May  4 11:48:53.745: INFO: Waiting for pod projected-volume-f2f23926-ed83-4b5c-99b3-21bede2707fd to disappear
May  4 11:48:53.764: INFO: Pod projected-volume-f2f23926-ed83-4b5c-99b3-21bede2707fd no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:53.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6672" for this suite.
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2397,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:48:53.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:53.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6629" for this suite.
STEP: Destroying namespace "nspatchtest-9bbd2c19-9671-436b-a8ab-1667ecb14156-7028" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":140,"skipped":2418,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:48:53.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
May  4 11:48:54.015: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-277'
May  4 11:48:54.473: INFO: stderr: ""
May  4 11:48:54.473: INFO: stdout: "pod/pause created\n"
May  4 11:48:54.473: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
May  4 11:48:54.473: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-277" to be "running and ready"
May  4 11:48:54.513: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 39.63574ms
May  4 11:48:56.517: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043906407s
May  4 11:48:58.522: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.048776353s
May  4 11:48:58.522: INFO: Pod "pause" satisfied condition "running and ready"
May  4 11:48:58.522: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
May  4 11:48:58.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-277'
May  4 11:48:58.633: INFO: stderr: ""
May  4 11:48:58.633: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May  4 11:48:58.633: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-277'
May  4 11:48:58.731: INFO: stderr: ""
May  4 11:48:58.731: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
May  4 11:48:58.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-277'
May  4 11:48:58.836: INFO: stderr: ""
May  4 11:48:58.836: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May  4 11:48:58.836: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-277'
May  4 11:48:58.927: INFO: stderr: ""
May  4 11:48:58.927: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
May  4 11:48:58.927: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-277'
May  4 11:48:59.059: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  4 11:48:59.059: INFO: stdout: "pod \"pause\" force deleted\n"
May  4 11:48:59.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-277'
May  4 11:48:59.280: INFO: stderr: "No resources found in kubectl-277 namespace.\n"
May  4 11:48:59.280: INFO: stdout: ""
May  4 11:48:59.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-277 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  4 11:48:59.387: INFO: stderr: ""
May  4 11:48:59.387: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:59.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-277" for this suite.

• [SLOW TEST:5.453 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":141,"skipped":2419,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:48:59.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:59.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-7524" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":142,"skipped":2426,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:48:59.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-b47bc721-d4db-4a16-9222-2b9a85c0399a
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:48:59.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9724" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":143,"skipped":2469,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:49:00.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 11:49:00.264: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b11e62ed-ab7d-436b-8cb2-249770c028b3" in namespace "security-context-test-6118" to be "Succeeded or Failed"
May  4 11:49:00.430: INFO: Pod "busybox-readonly-false-b11e62ed-ab7d-436b-8cb2-249770c028b3": Phase="Pending", Reason="", readiness=false. Elapsed: 166.264992ms
May  4 11:49:02.434: INFO: Pod "busybox-readonly-false-b11e62ed-ab7d-436b-8cb2-249770c028b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170542126s
May  4 11:49:04.441: INFO: Pod "busybox-readonly-false-b11e62ed-ab7d-436b-8cb2-249770c028b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.176993926s
May  4 11:49:04.441: INFO: Pod "busybox-readonly-false-b11e62ed-ab7d-436b-8cb2-249770c028b3" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:49:04.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6118" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":144,"skipped":2499,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:49:04.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0504 11:49:05.288249       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  4 11:49:05.288: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:49:05.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2960" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":145,"skipped":2512,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:49:05.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
May  4 11:49:05.397: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:49:05.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3288" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":146,"skipped":2518,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:49:05.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:49:12.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9707" for this suite.

• [SLOW TEST:7.215 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":147,"skipped":2527,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:49:12.707: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-0a0514d9-128b-4e85-888e-e09917c161d3
STEP: Creating a pod to test consume secrets
May  4 11:49:12.834: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3cbb3926-ce16-42da-ab4a-4924e2669af7" in namespace "projected-4901" to be "Succeeded or Failed"
May  4 11:49:12.881: INFO: Pod "pod-projected-secrets-3cbb3926-ce16-42da-ab4a-4924e2669af7": Phase="Pending", Reason="", readiness=false. Elapsed: 46.997094ms
May  4 11:49:14.885: INFO: Pod "pod-projected-secrets-3cbb3926-ce16-42da-ab4a-4924e2669af7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050978741s
May  4 11:49:16.889: INFO: Pod "pod-projected-secrets-3cbb3926-ce16-42da-ab4a-4924e2669af7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05544921s
STEP: Saw pod success
May  4 11:49:16.889: INFO: Pod "pod-projected-secrets-3cbb3926-ce16-42da-ab4a-4924e2669af7" satisfied condition "Succeeded or Failed"
May  4 11:49:16.892: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-3cbb3926-ce16-42da-ab4a-4924e2669af7 container projected-secret-volume-test: 
STEP: delete the pod
May  4 11:49:17.051: INFO: Waiting for pod pod-projected-secrets-3cbb3926-ce16-42da-ab4a-4924e2669af7 to disappear
May  4 11:49:17.058: INFO: Pod pod-projected-secrets-3cbb3926-ce16-42da-ab4a-4924e2669af7 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:49:17.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4901" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2530,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:49:17.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 11:49:17.141: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/: 
alternatives.log containers/ (node log directory listing; HTML links stripped, and the same listing repeats for each proxy iteration)
[remaining iterations, the teardown, and the PASSED summary for this spec are missing from the extracted log]
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-48abb6f9-d811-4b2d-81dc-3091ad75662e in namespace container-probe-5999
May  4 11:49:21.368: INFO: Started pod liveness-48abb6f9-d811-4b2d-81dc-3091ad75662e in namespace container-probe-5999
STEP: checking the pod's current state and verifying that restartCount is present
May  4 11:49:21.370: INFO: Initial restart count of pod liveness-48abb6f9-d811-4b2d-81dc-3091ad75662e is 0
May  4 11:49:37.420: INFO: Restart count of pod container-probe-5999/liveness-48abb6f9-d811-4b2d-81dc-3091ad75662e is now 1 (16.049062582s elapsed)
May  4 11:49:57.463: INFO: Restart count of pod container-probe-5999/liveness-48abb6f9-d811-4b2d-81dc-3091ad75662e is now 2 (36.092000766s elapsed)
May  4 11:50:17.505: INFO: Restart count of pod container-probe-5999/liveness-48abb6f9-d811-4b2d-81dc-3091ad75662e is now 3 (56.13457832s elapsed)
May  4 11:50:37.666: INFO: Restart count of pod container-probe-5999/liveness-48abb6f9-d811-4b2d-81dc-3091ad75662e is now 4 (1m16.29571339s elapsed)
May  4 11:51:45.845: INFO: Restart count of pod container-probe-5999/liveness-48abb6f9-d811-4b2d-81dc-3091ad75662e is now 5 (2m24.474373972s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:51:45.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5999" for this suite.

• [SLOW TEST:148.655 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":150,"skipped":2567,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:51:45.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0504 11:51:55.996925       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  4 11:51:55.996: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:51:55.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4292" for this suite.

• [SLOW TEST:10.112 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":151,"skipped":2578,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:51:56.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
May  4 11:51:56.075: INFO: Waiting up to 5m0s for pod "pod-a5b7b317-85fe-4513-b2cb-1cae2326d180" in namespace "emptydir-7632" to be "Succeeded or Failed"
May  4 11:51:56.127: INFO: Pod "pod-a5b7b317-85fe-4513-b2cb-1cae2326d180": Phase="Pending", Reason="", readiness=false. Elapsed: 52.222692ms
May  4 11:51:58.132: INFO: Pod "pod-a5b7b317-85fe-4513-b2cb-1cae2326d180": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056728245s
May  4 11:52:00.136: INFO: Pod "pod-a5b7b317-85fe-4513-b2cb-1cae2326d180": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061195353s
STEP: Saw pod success
May  4 11:52:00.137: INFO: Pod "pod-a5b7b317-85fe-4513-b2cb-1cae2326d180" satisfied condition "Succeeded or Failed"
May  4 11:52:00.140: INFO: Trying to get logs from node kali-worker2 pod pod-a5b7b317-85fe-4513-b2cb-1cae2326d180 container test-container: 
STEP: delete the pod
May  4 11:52:00.175: INFO: Waiting for pod pod-a5b7b317-85fe-4513-b2cb-1cae2326d180 to disappear
May  4 11:52:00.211: INFO: Pod pod-a5b7b317-85fe-4513-b2cb-1cae2326d180 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:52:00.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7632" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2593,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:52:00.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
May  4 11:52:00.272: INFO: >>> kubeConfig: /root/.kube/config
May  4 11:52:02.253: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:52:12.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2857" for this suite.

• [SLOW TEST:12.745 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":153,"skipped":2597,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:52:12.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:52:17.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8285" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2652,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:52:17.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 11:52:17.546: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 11:52:19.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189937, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189937, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189937, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724189937, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 11:52:22.577: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
May  4 11:52:22.599: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:52:22.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-612" for this suite.
STEP: Destroying namespace "webhook-612-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.687 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":155,"skipped":2665,"failed":0}
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:52:22.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-1871
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-1871
STEP: creating replication controller externalsvc in namespace services-1871
I0504 11:52:22.916578       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1871, replica count: 2
I0504 11:52:25.967085       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0504 11:52:28.967385       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
May  4 11:52:29.038: INFO: Creating new exec pod
May  4 11:52:33.140: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-1871 execpodp2mks -- /bin/sh -x -c nslookup clusterip-service'
May  4 11:52:33.389: INFO: stderr: "I0504 11:52:33.269728    2052 log.go:172] (0xc00020e790) (0xc0009b0460) Create stream\nI0504 11:52:33.269797    2052 log.go:172] (0xc00020e790) (0xc0009b0460) Stream added, broadcasting: 1\nI0504 11:52:33.272755    2052 log.go:172] (0xc00020e790) Reply frame received for 1\nI0504 11:52:33.272806    2052 log.go:172] (0xc00020e790) (0xc0009b0500) Create stream\nI0504 11:52:33.272822    2052 log.go:172] (0xc00020e790) (0xc0009b0500) Stream added, broadcasting: 3\nI0504 11:52:33.274198    2052 log.go:172] (0xc00020e790) Reply frame received for 3\nI0504 11:52:33.274243    2052 log.go:172] (0xc00020e790) (0xc0005ad540) Create stream\nI0504 11:52:33.274260    2052 log.go:172] (0xc00020e790) (0xc0005ad540) Stream added, broadcasting: 5\nI0504 11:52:33.275199    2052 log.go:172] (0xc00020e790) Reply frame received for 5\nI0504 11:52:33.376625    2052 log.go:172] (0xc00020e790) Data frame received for 5\nI0504 11:52:33.376657    2052 log.go:172] (0xc0005ad540) (5) Data frame handling\nI0504 11:52:33.376680    2052 log.go:172] (0xc0005ad540) (5) Data frame sent\n+ nslookup clusterip-service\nI0504 11:52:33.380824    2052 log.go:172] (0xc00020e790) Data frame received for 3\nI0504 11:52:33.380845    2052 log.go:172] (0xc0009b0500) (3) Data frame handling\nI0504 11:52:33.380866    2052 log.go:172] (0xc0009b0500) (3) Data frame sent\nI0504 11:52:33.382127    2052 log.go:172] (0xc00020e790) Data frame received for 3\nI0504 11:52:33.382155    2052 log.go:172] (0xc0009b0500) (3) Data frame handling\nI0504 11:52:33.382176    2052 log.go:172] (0xc0009b0500) (3) Data frame sent\nI0504 11:52:33.382636    2052 log.go:172] (0xc00020e790) Data frame received for 5\nI0504 11:52:33.382659    2052 log.go:172] (0xc0005ad540) (5) Data frame handling\nI0504 11:52:33.382956    2052 log.go:172] (0xc00020e790) Data frame received for 3\nI0504 11:52:33.382969    2052 log.go:172] (0xc0009b0500) (3) Data frame handling\nI0504 11:52:33.384433    2052 log.go:172] (0xc00020e790) Data frame received for 1\nI0504 11:52:33.384459    2052 log.go:172] (0xc0009b0460) (1) Data frame handling\nI0504 11:52:33.384474    2052 log.go:172] (0xc0009b0460) (1) Data frame sent\nI0504 11:52:33.384498    2052 log.go:172] (0xc00020e790) (0xc0009b0460) Stream removed, broadcasting: 1\nI0504 11:52:33.384532    2052 log.go:172] (0xc00020e790) Go away received\nI0504 11:52:33.384862    2052 log.go:172] (0xc00020e790) (0xc0009b0460) Stream removed, broadcasting: 1\nI0504 11:52:33.384878    2052 log.go:172] (0xc00020e790) (0xc0009b0500) Stream removed, broadcasting: 3\nI0504 11:52:33.384891    2052 log.go:172] (0xc00020e790) (0xc0005ad540) Stream removed, broadcasting: 5\n"
May  4 11:52:33.389: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-1871.svc.cluster.local\tcanonical name = externalsvc.services-1871.svc.cluster.local.\nName:\texternalsvc.services-1871.svc.cluster.local\nAddress: 10.96.160.6\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-1871, will wait for the garbage collector to delete the pods
May  4 11:52:33.450: INFO: Deleting ReplicationController externalsvc took: 7.378974ms
May  4 11:52:33.550: INFO: Terminating ReplicationController externalsvc pods took: 100.214449ms
May  4 11:52:38.090: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:52:38.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1871" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:15.422 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":156,"skipped":2665,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:52:38.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
May  4 11:52:48.333: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:48.333: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:48.368279       7 log.go:172] (0xc004792210) (0xc0029d5400) Create stream
I0504 11:52:48.368308       7 log.go:172] (0xc004792210) (0xc0029d5400) Stream added, broadcasting: 1
I0504 11:52:48.370699       7 log.go:172] (0xc004792210) Reply frame received for 1
I0504 11:52:48.370761       7 log.go:172] (0xc004792210) (0xc0029d5540) Create stream
I0504 11:52:48.370779       7 log.go:172] (0xc004792210) (0xc0029d5540) Stream added, broadcasting: 3
I0504 11:52:48.371766       7 log.go:172] (0xc004792210) Reply frame received for 3
I0504 11:52:48.371825       7 log.go:172] (0xc004792210) (0xc0017b8dc0) Create stream
I0504 11:52:48.371840       7 log.go:172] (0xc004792210) (0xc0017b8dc0) Stream added, broadcasting: 5
I0504 11:52:48.372540       7 log.go:172] (0xc004792210) Reply frame received for 5
I0504 11:52:48.428741       7 log.go:172] (0xc004792210) Data frame received for 5
I0504 11:52:48.428767       7 log.go:172] (0xc0017b8dc0) (5) Data frame handling
I0504 11:52:48.428785       7 log.go:172] (0xc004792210) Data frame received for 3
I0504 11:52:48.428791       7 log.go:172] (0xc0029d5540) (3) Data frame handling
I0504 11:52:48.428806       7 log.go:172] (0xc0029d5540) (3) Data frame sent
I0504 11:52:48.428811       7 log.go:172] (0xc004792210) Data frame received for 3
I0504 11:52:48.428830       7 log.go:172] (0xc0029d5540) (3) Data frame handling
I0504 11:52:48.430364       7 log.go:172] (0xc004792210) Data frame received for 1
I0504 11:52:48.430398       7 log.go:172] (0xc0029d5400) (1) Data frame handling
I0504 11:52:48.430474       7 log.go:172] (0xc0029d5400) (1) Data frame sent
I0504 11:52:48.430505       7 log.go:172] (0xc004792210) (0xc0029d5400) Stream removed, broadcasting: 1
I0504 11:52:48.430539       7 log.go:172] (0xc004792210) Go away received
I0504 11:52:48.430605       7 log.go:172] (0xc004792210) (0xc0029d5400) Stream removed, broadcasting: 1
I0504 11:52:48.430624       7 log.go:172] (0xc004792210) (0xc0029d5540) Stream removed, broadcasting: 3
I0504 11:52:48.430651       7 log.go:172] (0xc004792210) (0xc0017b8dc0) Stream removed, broadcasting: 5
May  4 11:52:48.430: INFO: Exec stderr: ""
May  4 11:52:48.430: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:48.430: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:48.465926       7 log.go:172] (0xc004792840) (0xc0029d5a40) Create stream
I0504 11:52:48.465948       7 log.go:172] (0xc004792840) (0xc0029d5a40) Stream added, broadcasting: 1
I0504 11:52:48.468550       7 log.go:172] (0xc004792840) Reply frame received for 1
I0504 11:52:48.468622       7 log.go:172] (0xc004792840) (0xc000f57900) Create stream
I0504 11:52:48.468647       7 log.go:172] (0xc004792840) (0xc000f57900) Stream added, broadcasting: 3
I0504 11:52:48.470051       7 log.go:172] (0xc004792840) Reply frame received for 3
I0504 11:52:48.470112       7 log.go:172] (0xc004792840) (0xc0029d5ae0) Create stream
I0504 11:52:48.470140       7 log.go:172] (0xc004792840) (0xc0029d5ae0) Stream added, broadcasting: 5
I0504 11:52:48.471242       7 log.go:172] (0xc004792840) Reply frame received for 5
I0504 11:52:48.545944       7 log.go:172] (0xc004792840) Data frame received for 3
I0504 11:52:48.546037       7 log.go:172] (0xc000f57900) (3) Data frame handling
I0504 11:52:48.546059       7 log.go:172] (0xc000f57900) (3) Data frame sent
I0504 11:52:48.546070       7 log.go:172] (0xc004792840) Data frame received for 3
I0504 11:52:48.546086       7 log.go:172] (0xc000f57900) (3) Data frame handling
I0504 11:52:48.546116       7 log.go:172] (0xc004792840) Data frame received for 5
I0504 11:52:48.546155       7 log.go:172] (0xc0029d5ae0) (5) Data frame handling
I0504 11:52:48.547706       7 log.go:172] (0xc004792840) Data frame received for 1
I0504 11:52:48.547756       7 log.go:172] (0xc0029d5a40) (1) Data frame handling
I0504 11:52:48.547781       7 log.go:172] (0xc0029d5a40) (1) Data frame sent
I0504 11:52:48.547797       7 log.go:172] (0xc004792840) (0xc0029d5a40) Stream removed, broadcasting: 1
I0504 11:52:48.547857       7 log.go:172] (0xc004792840) Go away received
I0504 11:52:48.548132       7 log.go:172] (0xc004792840) (0xc0029d5a40) Stream removed, broadcasting: 1
I0504 11:52:48.548176       7 log.go:172] (0xc004792840) (0xc000f57900) Stream removed, broadcasting: 3
I0504 11:52:48.548198       7 log.go:172] (0xc004792840) (0xc0029d5ae0) Stream removed, broadcasting: 5
May  4 11:52:48.548: INFO: Exec stderr: ""
May  4 11:52:48.548: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:48.548: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:48.579946       7 log.go:172] (0xc004792e70) (0xc0029d5f40) Create stream
I0504 11:52:48.579974       7 log.go:172] (0xc004792e70) (0xc0029d5f40) Stream added, broadcasting: 1
I0504 11:52:48.582956       7 log.go:172] (0xc004792e70) Reply frame received for 1
I0504 11:52:48.582992       7 log.go:172] (0xc004792e70) (0xc001f4c000) Create stream
I0504 11:52:48.583009       7 log.go:172] (0xc004792e70) (0xc001f4c000) Stream added, broadcasting: 3
I0504 11:52:48.584076       7 log.go:172] (0xc004792e70) Reply frame received for 3
I0504 11:52:48.584115       7 log.go:172] (0xc004792e70) (0xc000eb5040) Create stream
I0504 11:52:48.584130       7 log.go:172] (0xc004792e70) (0xc000eb5040) Stream added, broadcasting: 5
I0504 11:52:48.585064       7 log.go:172] (0xc004792e70) Reply frame received for 5
I0504 11:52:48.653704       7 log.go:172] (0xc004792e70) Data frame received for 5
I0504 11:52:48.653739       7 log.go:172] (0xc000eb5040) (5) Data frame handling
I0504 11:52:48.653763       7 log.go:172] (0xc004792e70) Data frame received for 3
I0504 11:52:48.653775       7 log.go:172] (0xc001f4c000) (3) Data frame handling
I0504 11:52:48.653786       7 log.go:172] (0xc001f4c000) (3) Data frame sent
I0504 11:52:48.653795       7 log.go:172] (0xc004792e70) Data frame received for 3
I0504 11:52:48.653803       7 log.go:172] (0xc001f4c000) (3) Data frame handling
I0504 11:52:48.655296       7 log.go:172] (0xc004792e70) Data frame received for 1
I0504 11:52:48.655318       7 log.go:172] (0xc0029d5f40) (1) Data frame handling
I0504 11:52:48.655327       7 log.go:172] (0xc0029d5f40) (1) Data frame sent
I0504 11:52:48.655335       7 log.go:172] (0xc004792e70) (0xc0029d5f40) Stream removed, broadcasting: 1
I0504 11:52:48.655406       7 log.go:172] (0xc004792e70) (0xc0029d5f40) Stream removed, broadcasting: 1
I0504 11:52:48.655423       7 log.go:172] (0xc004792e70) (0xc001f4c000) Stream removed, broadcasting: 3
I0504 11:52:48.655502       7 log.go:172] (0xc004792e70) Go away received
I0504 11:52:48.655585       7 log.go:172] (0xc004792e70) (0xc000eb5040) Stream removed, broadcasting: 5
May  4 11:52:48.655: INFO: Exec stderr: ""
May  4 11:52:48.655: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:48.655: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:48.688287       7 log.go:172] (0xc00453e420) (0xc00258e000) Create stream
I0504 11:52:48.688531       7 log.go:172] (0xc00453e420) (0xc00258e000) Stream added, broadcasting: 1
I0504 11:52:48.698642       7 log.go:172] (0xc00453e420) Reply frame received for 1
I0504 11:52:48.698700       7 log.go:172] (0xc00453e420) (0xc00176ff40) Create stream
I0504 11:52:48.698718       7 log.go:172] (0xc00453e420) (0xc00176ff40) Stream added, broadcasting: 3
I0504 11:52:48.700659       7 log.go:172] (0xc00453e420) Reply frame received for 3
I0504 11:52:48.700700       7 log.go:172] (0xc00453e420) (0xc002a9a140) Create stream
I0504 11:52:48.700716       7 log.go:172] (0xc00453e420) (0xc002a9a140) Stream added, broadcasting: 5
I0504 11:52:48.703494       7 log.go:172] (0xc00453e420) Reply frame received for 5
I0504 11:52:48.756779       7 log.go:172] (0xc00453e420) Data frame received for 3
I0504 11:52:48.756802       7 log.go:172] (0xc00176ff40) (3) Data frame handling
I0504 11:52:48.756809       7 log.go:172] (0xc00176ff40) (3) Data frame sent
I0504 11:52:48.756813       7 log.go:172] (0xc00453e420) Data frame received for 3
I0504 11:52:48.756818       7 log.go:172] (0xc00176ff40) (3) Data frame handling
I0504 11:52:48.756860       7 log.go:172] (0xc00453e420) Data frame received for 5
I0504 11:52:48.756868       7 log.go:172] (0xc002a9a140) (5) Data frame handling
I0504 11:52:48.758268       7 log.go:172] (0xc00453e420) Data frame received for 1
I0504 11:52:48.758290       7 log.go:172] (0xc00258e000) (1) Data frame handling
I0504 11:52:48.758304       7 log.go:172] (0xc00258e000) (1) Data frame sent
I0504 11:52:48.758322       7 log.go:172] (0xc00453e420) (0xc00258e000) Stream removed, broadcasting: 1
I0504 11:52:48.758344       7 log.go:172] (0xc00453e420) Go away received
I0504 11:52:48.758597       7 log.go:172] (0xc00453e420) (0xc00258e000) Stream removed, broadcasting: 1
I0504 11:52:48.758615       7 log.go:172] (0xc00453e420) (0xc00176ff40) Stream removed, broadcasting: 3
I0504 11:52:48.758637       7 log.go:172] (0xc00453e420) (0xc002a9a140) Stream removed, broadcasting: 5
May  4 11:52:48.758: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
May  4 11:52:48.758: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:48.758: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:48.789273       7 log.go:172] (0xc0049f4370) (0xc002a9a500) Create stream
I0504 11:52:48.789295       7 log.go:172] (0xc0049f4370) (0xc002a9a500) Stream added, broadcasting: 1
I0504 11:52:48.791884       7 log.go:172] (0xc0049f4370) Reply frame received for 1
I0504 11:52:48.791918       7 log.go:172] (0xc0049f4370) (0xc001fc6000) Create stream
I0504 11:52:48.791926       7 log.go:172] (0xc0049f4370) (0xc001fc6000) Stream added, broadcasting: 3
I0504 11:52:48.793086       7 log.go:172] (0xc0049f4370) Reply frame received for 3
I0504 11:52:48.793222       7 log.go:172] (0xc0049f4370) (0xc002a9a640) Create stream
I0504 11:52:48.793244       7 log.go:172] (0xc0049f4370) (0xc002a9a640) Stream added, broadcasting: 5
I0504 11:52:48.794128       7 log.go:172] (0xc0049f4370) Reply frame received for 5
I0504 11:52:48.853274       7 log.go:172] (0xc0049f4370) Data frame received for 3
I0504 11:52:48.853300       7 log.go:172] (0xc001fc6000) (3) Data frame handling
I0504 11:52:48.853308       7 log.go:172] (0xc001fc6000) (3) Data frame sent
I0504 11:52:48.853313       7 log.go:172] (0xc0049f4370) Data frame received for 3
I0504 11:52:48.853317       7 log.go:172] (0xc001fc6000) (3) Data frame handling
I0504 11:52:48.853339       7 log.go:172] (0xc0049f4370) Data frame received for 5
I0504 11:52:48.853346       7 log.go:172] (0xc002a9a640) (5) Data frame handling
I0504 11:52:48.854978       7 log.go:172] (0xc0049f4370) Data frame received for 1
I0504 11:52:48.854995       7 log.go:172] (0xc002a9a500) (1) Data frame handling
I0504 11:52:48.855011       7 log.go:172] (0xc002a9a500) (1) Data frame sent
I0504 11:52:48.855023       7 log.go:172] (0xc0049f4370) (0xc002a9a500) Stream removed, broadcasting: 1
I0504 11:52:48.855035       7 log.go:172] (0xc0049f4370) Go away received
I0504 11:52:48.855117       7 log.go:172] (0xc0049f4370) (0xc002a9a500) Stream removed, broadcasting: 1
I0504 11:52:48.855133       7 log.go:172] (0xc0049f4370) (0xc001fc6000) Stream removed, broadcasting: 3
I0504 11:52:48.855140       7 log.go:172] (0xc0049f4370) (0xc002a9a640) Stream removed, broadcasting: 5
May  4 11:52:48.855: INFO: Exec stderr: ""
May  4 11:52:48.855: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:48.855: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:48.879420       7 log.go:172] (0xc003bf69a0) (0xc001fc6460) Create stream
I0504 11:52:48.879442       7 log.go:172] (0xc003bf69a0) (0xc001fc6460) Stream added, broadcasting: 1
I0504 11:52:48.890255       7 log.go:172] (0xc003bf69a0) Reply frame received for 1
I0504 11:52:48.890319       7 log.go:172] (0xc003bf69a0) (0xc002a9a0a0) Create stream
I0504 11:52:48.890328       7 log.go:172] (0xc003bf69a0) (0xc002a9a0a0) Stream added, broadcasting: 3
I0504 11:52:48.891044       7 log.go:172] (0xc003bf69a0) Reply frame received for 3
I0504 11:52:48.891071       7 log.go:172] (0xc003bf69a0) (0xc001fc6000) Create stream
I0504 11:52:48.891081       7 log.go:172] (0xc003bf69a0) (0xc001fc6000) Stream added, broadcasting: 5
I0504 11:52:48.891749       7 log.go:172] (0xc003bf69a0) Reply frame received for 5
I0504 11:52:48.939989       7 log.go:172] (0xc003bf69a0) Data frame received for 5
I0504 11:52:48.940016       7 log.go:172] (0xc001fc6000) (5) Data frame handling
I0504 11:52:48.940033       7 log.go:172] (0xc003bf69a0) Data frame received for 3
I0504 11:52:48.940039       7 log.go:172] (0xc002a9a0a0) (3) Data frame handling
I0504 11:52:48.940051       7 log.go:172] (0xc002a9a0a0) (3) Data frame sent
I0504 11:52:48.940064       7 log.go:172] (0xc003bf69a0) Data frame received for 3
I0504 11:52:48.940068       7 log.go:172] (0xc002a9a0a0) (3) Data frame handling
I0504 11:52:48.941955       7 log.go:172] (0xc003bf69a0) Data frame received for 1
I0504 11:52:48.942080       7 log.go:172] (0xc001fc6460) (1) Data frame handling
I0504 11:52:48.942108       7 log.go:172] (0xc001fc6460) (1) Data frame sent
I0504 11:52:48.942125       7 log.go:172] (0xc003bf69a0) (0xc001fc6460) Stream removed, broadcasting: 1
I0504 11:52:48.942156       7 log.go:172] (0xc003bf69a0) Go away received
I0504 11:52:48.942214       7 log.go:172] (0xc003bf69a0) (0xc001fc6460) Stream removed, broadcasting: 1
I0504 11:52:48.942230       7 log.go:172] (0xc003bf69a0) (0xc002a9a0a0) Stream removed, broadcasting: 3
I0504 11:52:48.942239       7 log.go:172] (0xc003bf69a0) (0xc001fc6000) Stream removed, broadcasting: 5
May  4 11:52:48.942: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
May  4 11:52:48.942: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:48.942: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:48.979489       7 log.go:172] (0xc001e60420) (0xc001fc6640) Create stream
I0504 11:52:48.979527       7 log.go:172] (0xc001e60420) (0xc001fc6640) Stream added, broadcasting: 1
I0504 11:52:48.981478       7 log.go:172] (0xc001e60420) Reply frame received for 1
I0504 11:52:48.981510       7 log.go:172] (0xc001e60420) (0xc00176e000) Create stream
I0504 11:52:48.981522       7 log.go:172] (0xc001e60420) (0xc00176e000) Stream added, broadcasting: 3
I0504 11:52:48.982218       7 log.go:172] (0xc001e60420) Reply frame received for 3
I0504 11:52:48.982276       7 log.go:172] (0xc001e60420) (0xc0029d4460) Create stream
I0504 11:52:48.982303       7 log.go:172] (0xc001e60420) (0xc0029d4460) Stream added, broadcasting: 5
I0504 11:52:48.982871       7 log.go:172] (0xc001e60420) Reply frame received for 5
I0504 11:52:49.028124       7 log.go:172] (0xc001e60420) Data frame received for 5
I0504 11:52:49.028185       7 log.go:172] (0xc0029d4460) (5) Data frame handling
I0504 11:52:49.028253       7 log.go:172] (0xc001e60420) Data frame received for 3
I0504 11:52:49.028302       7 log.go:172] (0xc00176e000) (3) Data frame handling
I0504 11:52:49.028358       7 log.go:172] (0xc00176e000) (3) Data frame sent
I0504 11:52:49.028388       7 log.go:172] (0xc001e60420) Data frame received for 3
I0504 11:52:49.028413       7 log.go:172] (0xc00176e000) (3) Data frame handling
I0504 11:52:49.030524       7 log.go:172] (0xc001e60420) Data frame received for 1
I0504 11:52:49.030550       7 log.go:172] (0xc001fc6640) (1) Data frame handling
I0504 11:52:49.030566       7 log.go:172] (0xc001fc6640) (1) Data frame sent
I0504 11:52:49.030832       7 log.go:172] (0xc001e60420) (0xc001fc6640) Stream removed, broadcasting: 1
I0504 11:52:49.030876       7 log.go:172] (0xc001e60420) Go away received
I0504 11:52:49.030968       7 log.go:172] (0xc001e60420) (0xc001fc6640) Stream removed, broadcasting: 1
I0504 11:52:49.031003       7 log.go:172] (0xc001e60420) (0xc00176e000) Stream removed, broadcasting: 3
I0504 11:52:49.031034       7 log.go:172] (0xc001e60420) (0xc0029d4460) Stream removed, broadcasting: 5
May  4 11:52:49.031: INFO: Exec stderr: ""
May  4 11:52:49.031: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:49.031: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:49.064849       7 log.go:172] (0xc001e3e370) (0xc00176e5a0) Create stream
I0504 11:52:49.064879       7 log.go:172] (0xc001e3e370) (0xc00176e5a0) Stream added, broadcasting: 1
I0504 11:52:49.066654       7 log.go:172] (0xc001e3e370) Reply frame received for 1
I0504 11:52:49.066682       7 log.go:172] (0xc001e3e370) (0xc000a3a0a0) Create stream
I0504 11:52:49.066692       7 log.go:172] (0xc001e3e370) (0xc000a3a0a0) Stream added, broadcasting: 3
I0504 11:52:49.067317       7 log.go:172] (0xc001e3e370) Reply frame received for 3
I0504 11:52:49.067338       7 log.go:172] (0xc001e3e370) (0xc001fc66e0) Create stream
I0504 11:52:49.067345       7 log.go:172] (0xc001e3e370) (0xc001fc66e0) Stream added, broadcasting: 5
I0504 11:52:49.067900       7 log.go:172] (0xc001e3e370) Reply frame received for 5
I0504 11:52:49.129419       7 log.go:172] (0xc001e3e370) Data frame received for 5
I0504 11:52:49.129492       7 log.go:172] (0xc001fc66e0) (5) Data frame handling
I0504 11:52:49.129531       7 log.go:172] (0xc001e3e370) Data frame received for 3
I0504 11:52:49.129546       7 log.go:172] (0xc000a3a0a0) (3) Data frame handling
I0504 11:52:49.129568       7 log.go:172] (0xc000a3a0a0) (3) Data frame sent
I0504 11:52:49.129581       7 log.go:172] (0xc001e3e370) Data frame received for 3
I0504 11:52:49.129594       7 log.go:172] (0xc000a3a0a0) (3) Data frame handling
I0504 11:52:49.131077       7 log.go:172] (0xc001e3e370) Data frame received for 1
I0504 11:52:49.131133       7 log.go:172] (0xc00176e5a0) (1) Data frame handling
I0504 11:52:49.131160       7 log.go:172] (0xc00176e5a0) (1) Data frame sent
I0504 11:52:49.131368       7 log.go:172] (0xc001e3e370) (0xc00176e5a0) Stream removed, broadcasting: 1
I0504 11:52:49.131463       7 log.go:172] (0xc001e3e370) Go away received
I0504 11:52:49.131485       7 log.go:172] (0xc001e3e370) (0xc00176e5a0) Stream removed, broadcasting: 1
I0504 11:52:49.131499       7 log.go:172] (0xc001e3e370) (0xc000a3a0a0) Stream removed, broadcasting: 3
I0504 11:52:49.131507       7 log.go:172] (0xc001e3e370) (0xc001fc66e0) Stream removed, broadcasting: 5
May  4 11:52:49.131: INFO: Exec stderr: ""
May  4 11:52:49.131: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:49.131: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:49.168285       7 log.go:172] (0xc001e3e8f0) (0xc00176e780) Create stream
I0504 11:52:49.168333       7 log.go:172] (0xc001e3e8f0) (0xc00176e780) Stream added, broadcasting: 1
I0504 11:52:49.170414       7 log.go:172] (0xc001e3e8f0) Reply frame received for 1
I0504 11:52:49.170476       7 log.go:172] (0xc001e3e8f0) (0xc002a9a140) Create stream
I0504 11:52:49.170503       7 log.go:172] (0xc001e3e8f0) (0xc002a9a140) Stream added, broadcasting: 3
I0504 11:52:49.171603       7 log.go:172] (0xc001e3e8f0) Reply frame received for 3
I0504 11:52:49.171636       7 log.go:172] (0xc001e3e8f0) (0xc002a9a1e0) Create stream
I0504 11:52:49.171647       7 log.go:172] (0xc001e3e8f0) (0xc002a9a1e0) Stream added, broadcasting: 5
I0504 11:52:49.172497       7 log.go:172] (0xc001e3e8f0) Reply frame received for 5
I0504 11:52:49.235715       7 log.go:172] (0xc001e3e8f0) Data frame received for 5
I0504 11:52:49.235769       7 log.go:172] (0xc002a9a1e0) (5) Data frame handling
I0504 11:52:49.235809       7 log.go:172] (0xc001e3e8f0) Data frame received for 3
I0504 11:52:49.235825       7 log.go:172] (0xc002a9a140) (3) Data frame handling
I0504 11:52:49.235838       7 log.go:172] (0xc002a9a140) (3) Data frame sent
I0504 11:52:49.235857       7 log.go:172] (0xc001e3e8f0) Data frame received for 3
I0504 11:52:49.235871       7 log.go:172] (0xc002a9a140) (3) Data frame handling
I0504 11:52:49.237510       7 log.go:172] (0xc001e3e8f0) Data frame received for 1
I0504 11:52:49.237534       7 log.go:172] (0xc00176e780) (1) Data frame handling
I0504 11:52:49.237544       7 log.go:172] (0xc00176e780) (1) Data frame sent
I0504 11:52:49.237557       7 log.go:172] (0xc001e3e8f0) (0xc00176e780) Stream removed, broadcasting: 1
I0504 11:52:49.237573       7 log.go:172] (0xc001e3e8f0) Go away received
I0504 11:52:49.237660       7 log.go:172] (0xc001e3e8f0) (0xc00176e780) Stream removed, broadcasting: 1
I0504 11:52:49.237690       7 log.go:172] (0xc001e3e8f0) (0xc002a9a140) Stream removed, broadcasting: 3
I0504 11:52:49.237702       7 log.go:172] (0xc001e3e8f0) (0xc002a9a1e0) Stream removed, broadcasting: 5
May  4 11:52:49.237: INFO: Exec stderr: ""
May  4 11:52:49.237: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-605 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 11:52:49.237: INFO: >>> kubeConfig: /root/.kube/config
I0504 11:52:49.285589       7 log.go:172] (0xc001e3f080) (0xc00176ef00) Create stream
I0504 11:52:49.285620       7 log.go:172] (0xc001e3f080) (0xc00176ef00) Stream added, broadcasting: 1
I0504 11:52:49.287827       7 log.go:172] (0xc001e3f080) Reply frame received for 1
I0504 11:52:49.287878       7 log.go:172] (0xc001e3f080) (0xc000a3a320) Create stream
I0504 11:52:49.287890       7 log.go:172] (0xc001e3f080) (0xc000a3a320) Stream added, broadcasting: 3
I0504 11:52:49.288949       7 log.go:172] (0xc001e3f080) Reply frame received for 3
I0504 11:52:49.288991       7 log.go:172] (0xc001e3f080) (0xc001fc6820) Create stream
I0504 11:52:49.289004       7 log.go:172] (0xc001e3f080) (0xc001fc6820) Stream added, broadcasting: 5
I0504 11:52:49.290109       7 log.go:172] (0xc001e3f080) Reply frame received for 5
I0504 11:52:49.348378       7 log.go:172] (0xc001e3f080) Data frame received for 3
I0504 11:52:49.348407       7 log.go:172] (0xc000a3a320) (3) Data frame handling
I0504 11:52:49.348414       7 log.go:172] (0xc000a3a320) (3) Data frame sent
I0504 11:52:49.348449       7 log.go:172] (0xc001e3f080) Data frame received for 5
I0504 11:52:49.348482       7 log.go:172] (0xc001fc6820) (5) Data frame handling
I0504 11:52:49.348518       7 log.go:172] (0xc001e3f080) Data frame received for 3
I0504 11:52:49.348539       7 log.go:172] (0xc000a3a320) (3) Data frame handling
I0504 11:52:49.350194       7 log.go:172] (0xc001e3f080) Data frame received for 1
I0504 11:52:49.350210       7 log.go:172] (0xc00176ef00) (1) Data frame handling
I0504 11:52:49.350229       7 log.go:172] (0xc00176ef00) (1) Data frame sent
I0504 11:52:49.350358       7 log.go:172] (0xc001e3f080) (0xc00176ef00) Stream removed, broadcasting: 1
I0504 11:52:49.350447       7 log.go:172] (0xc001e3f080) (0xc00176ef00) Stream removed, broadcasting: 1
I0504 11:52:49.350465       7 log.go:172] (0xc001e3f080) (0xc000a3a320) Stream removed, broadcasting: 3
I0504 11:52:49.350715       7 log.go:172] (0xc001e3f080) (0xc001fc6820) Stream removed, broadcasting: 5
May  4 11:52:49.350: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:52:49.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
I0504 11:52:49.350933       7 log.go:172] (0xc001e3f080) Go away received
STEP: Destroying namespace "e2e-kubelet-etc-hosts-605" for this suite.

• [SLOW TEST:11.197 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2678,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:52:49.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  4 11:52:53.996: INFO: Successfully updated pod "annotationupdateb645b62e-75af-4ca6-9ed2-ea56a99ac80d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:52:56.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4466" for this suite.

• [SLOW TEST:6.683 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2702,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:52:56.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 11:52:56.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
May  4 11:52:56.812: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-04T11:52:56Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-04T11:52:56Z]] name:name1 resourceVersion:1431900 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8fa7b881-dc35-44c4-ad3b-c5f91a179a0c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May  4 11:53:06.820: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-04T11:53:06Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-04T11:53:06Z]] name:name2 resourceVersion:1431943 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ed7e27fe-bd4a-4d42-85ee-5a15158bbae6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May  4 11:53:16.835: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-04T11:52:56Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-04T11:53:16Z]] name:name1 resourceVersion:1431973 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8fa7b881-dc35-44c4-ad3b-c5f91a179a0c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May  4 11:53:26.842: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-04T11:53:06Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-04T11:53:26Z]] name:name2 resourceVersion:1432002 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ed7e27fe-bd4a-4d42-85ee-5a15158bbae6] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May  4 11:53:36.850: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-04T11:52:56Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-04T11:53:16Z]] name:name1 resourceVersion:1432040 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:8fa7b881-dc35-44c4-ad3b-c5f91a179a0c] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May  4 11:53:46.858: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-04T11:53:06Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-05-04T11:53:26Z]] name:name2 resourceVersion:1432070 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ed7e27fe-bd4a-4d42-85ee-5a15158bbae6] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:53:57.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-2224" for this suite.

• [SLOW TEST:61.359 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":159,"skipped":2716,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:53:57.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May  4 11:53:57.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-992'
May  4 11:53:57.771: INFO: stderr: ""
May  4 11:53:57.771: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  4 11:53:57.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-992'
May  4 11:53:57.901: INFO: stderr: ""
May  4 11:53:57.901: INFO: stdout: "update-demo-nautilus-m24p4 update-demo-nautilus-rd6nx "
May  4 11:53:57.901: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m24p4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:53:58.017: INFO: stderr: ""
May  4 11:53:58.017: INFO: stdout: ""
May  4 11:53:58.017: INFO: update-demo-nautilus-m24p4 is created but not running
May  4 11:54:03.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-992'
May  4 11:54:03.116: INFO: stderr: ""
May  4 11:54:03.117: INFO: stdout: "update-demo-nautilus-m24p4 update-demo-nautilus-rd6nx "
May  4 11:54:03.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m24p4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:03.211: INFO: stderr: ""
May  4 11:54:03.211: INFO: stdout: "true"
May  4 11:54:03.211: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m24p4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:03.305: INFO: stderr: ""
May  4 11:54:03.305: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  4 11:54:03.305: INFO: validating pod update-demo-nautilus-m24p4
May  4 11:54:03.310: INFO: got data: {
  "image": "nautilus.jpg"
}

May  4 11:54:03.310: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  4 11:54:03.310: INFO: update-demo-nautilus-m24p4 is verified up and running
May  4 11:54:03.310: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rd6nx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:03.416: INFO: stderr: ""
May  4 11:54:03.416: INFO: stdout: "true"
May  4 11:54:03.416: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rd6nx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:03.514: INFO: stderr: ""
May  4 11:54:03.514: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  4 11:54:03.514: INFO: validating pod update-demo-nautilus-rd6nx
May  4 11:54:03.520: INFO: got data: {
  "image": "nautilus.jpg"
}

May  4 11:54:03.520: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  4 11:54:03.520: INFO: update-demo-nautilus-rd6nx is verified up and running
STEP: scaling down the replication controller
May  4 11:54:03.522: INFO: scanned /root for discovery docs: 
May  4 11:54:03.522: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-992'
May  4 11:54:04.628: INFO: stderr: ""
May  4 11:54:04.628: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  4 11:54:04.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-992'
May  4 11:54:04.730: INFO: stderr: ""
May  4 11:54:04.730: INFO: stdout: "update-demo-nautilus-m24p4 update-demo-nautilus-rd6nx "
STEP: Replicas for name=update-demo: expected=1 actual=2
May  4 11:54:09.731: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-992'
May  4 11:54:09.854: INFO: stderr: ""
May  4 11:54:09.854: INFO: stdout: "update-demo-nautilus-m24p4 "
May  4 11:54:09.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m24p4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:09.988: INFO: stderr: ""
May  4 11:54:09.988: INFO: stdout: "true"
May  4 11:54:09.988: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m24p4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:10.143: INFO: stderr: ""
May  4 11:54:10.143: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  4 11:54:10.143: INFO: validating pod update-demo-nautilus-m24p4
May  4 11:54:10.147: INFO: got data: {
  "image": "nautilus.jpg"
}

May  4 11:54:10.147: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  4 11:54:10.147: INFO: update-demo-nautilus-m24p4 is verified up and running
STEP: scaling up the replication controller
May  4 11:54:10.150: INFO: scanned /root for discovery docs: 
May  4 11:54:10.150: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-992'
May  4 11:54:11.304: INFO: stderr: ""
May  4 11:54:11.304: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  4 11:54:11.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-992'
May  4 11:54:11.422: INFO: stderr: ""
May  4 11:54:11.422: INFO: stdout: "update-demo-nautilus-m24p4 update-demo-nautilus-qbx95 "
May  4 11:54:11.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m24p4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:11.510: INFO: stderr: ""
May  4 11:54:11.510: INFO: stdout: "true"
May  4 11:54:11.511: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m24p4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:11.606: INFO: stderr: ""
May  4 11:54:11.606: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  4 11:54:11.606: INFO: validating pod update-demo-nautilus-m24p4
May  4 11:54:11.674: INFO: got data: {
  "image": "nautilus.jpg"
}

May  4 11:54:11.674: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  4 11:54:11.674: INFO: update-demo-nautilus-m24p4 is verified up and running
May  4 11:54:11.674: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qbx95 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:11.768: INFO: stderr: ""
May  4 11:54:11.769: INFO: stdout: ""
May  4 11:54:11.769: INFO: update-demo-nautilus-qbx95 is created but not running
May  4 11:54:16.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-992'
May  4 11:54:16.874: INFO: stderr: ""
May  4 11:54:16.874: INFO: stdout: "update-demo-nautilus-m24p4 update-demo-nautilus-qbx95 "
May  4 11:54:16.874: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m24p4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:16.964: INFO: stderr: ""
May  4 11:54:16.964: INFO: stdout: "true"
May  4 11:54:16.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m24p4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:17.061: INFO: stderr: ""
May  4 11:54:17.061: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  4 11:54:17.061: INFO: validating pod update-demo-nautilus-m24p4
May  4 11:54:17.065: INFO: got data: {
  "image": "nautilus.jpg"
}

May  4 11:54:17.065: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  4 11:54:17.065: INFO: update-demo-nautilus-m24p4 is verified up and running
May  4 11:54:17.065: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qbx95 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:17.153: INFO: stderr: ""
May  4 11:54:17.153: INFO: stdout: "true"
May  4 11:54:17.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-qbx95 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-992'
May  4 11:54:17.245: INFO: stderr: ""
May  4 11:54:17.245: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  4 11:54:17.245: INFO: validating pod update-demo-nautilus-qbx95
May  4 11:54:17.249: INFO: got data: {
  "image": "nautilus.jpg"
}

May  4 11:54:17.249: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  4 11:54:17.249: INFO: update-demo-nautilus-qbx95 is verified up and running
STEP: using delete to clean up resources
May  4 11:54:17.249: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-992'
May  4 11:54:17.344: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  4 11:54:17.344: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May  4 11:54:17.344: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-992'
May  4 11:54:17.436: INFO: stderr: "No resources found in kubectl-992 namespace.\n"
May  4 11:54:17.436: INFO: stdout: ""
May  4 11:54:17.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-992 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  4 11:54:17.523: INFO: stderr: ""
May  4 11:54:17.523: INFO: stdout: "update-demo-nautilus-m24p4\nupdate-demo-nautilus-qbx95\n"
May  4 11:54:18.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-992'
May  4 11:54:18.133: INFO: stderr: "No resources found in kubectl-992 namespace.\n"
May  4 11:54:18.133: INFO: stdout: ""
May  4 11:54:18.133: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-992 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  4 11:54:18.239: INFO: stderr: ""
May  4 11:54:18.239: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:54:18.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-992" for this suite.

• [SLOW TEST:20.844 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":160,"skipped":2733,"failed":0}
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:54:18.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:54:23.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-365" for this suite.

• [SLOW TEST:5.377 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":161,"skipped":2733,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:54:23.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:54:23.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5645" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":162,"skipped":2733,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:54:23.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 11:54:24.393: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 11:54:26.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190064, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190064, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190064, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190064, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 11:54:29.439: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 11:54:29.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4126-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:54:31.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9377" for this suite.
STEP: Destroying namespace "webhook-9377-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.335 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":163,"skipped":2738,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:54:31.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  4 11:54:36.431: INFO: Successfully updated pod "annotationupdatef2465439-83d8-48bb-a53d-2272ae2029a0"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:54:40.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9300" for this suite.

• [SLOW TEST:9.370 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":164,"skipped":2771,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:54:40.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  4 11:54:40.566: INFO: Waiting up to 5m0s for pod "downward-api-89c48106-6442-4c2d-92c9-7e6a69a7665e" in namespace "downward-api-8146" to be "Succeeded or Failed"
May  4 11:54:40.583: INFO: Pod "downward-api-89c48106-6442-4c2d-92c9-7e6a69a7665e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.081115ms
May  4 11:54:42.587: INFO: Pod "downward-api-89c48106-6442-4c2d-92c9-7e6a69a7665e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020904582s
May  4 11:54:44.603: INFO: Pod "downward-api-89c48106-6442-4c2d-92c9-7e6a69a7665e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036786336s
STEP: Saw pod success
May  4 11:54:44.603: INFO: Pod "downward-api-89c48106-6442-4c2d-92c9-7e6a69a7665e" satisfied condition "Succeeded or Failed"
May  4 11:54:44.606: INFO: Trying to get logs from node kali-worker pod downward-api-89c48106-6442-4c2d-92c9-7e6a69a7665e container dapi-container: 
STEP: delete the pod
May  4 11:54:44.717: INFO: Waiting for pod downward-api-89c48106-6442-4c2d-92c9-7e6a69a7665e to disappear
May  4 11:54:44.744: INFO: Pod downward-api-89c48106-6442-4c2d-92c9-7e6a69a7665e no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:54:44.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8146" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2792,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:54:44.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
May  4 11:54:44.824: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May  4 11:54:44.845: INFO: Waiting for terminating namespaces to be deleted...
May  4 11:54:44.847: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test

May  4 11:54:44.851: INFO: kindnet-f8plf from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  4 11:54:44.851: INFO: 	Container kindnet-cni ready: true, restart count 1
May  4 11:54:44.851: INFO: kube-proxy-vrswj from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  4 11:54:44.851: INFO: 	Container kube-proxy ready: true, restart count 0
May  4 11:54:44.851: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
May  4 11:54:44.855: INFO: kindnet-mcdh2 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  4 11:54:44.855: INFO: 	Container kindnet-cni ready: true, restart count 0
May  4 11:54:44.855: INFO: annotationupdatef2465439-83d8-48bb-a53d-2272ae2029a0 from downward-api-9300 started at 2020-05-04 11:54:31 +0000 UTC (1 container statuses recorded)
May  4 11:54:44.855: INFO: 	Container client-container ready: true, restart count 0
May  4 11:54:44.855: INFO: kube-proxy-mmnb6 from kube-system started at 2020-04-29 09:31:40 +0000 UTC (1 container statuses recorded)
May  4 11:54:44.855: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-e903b018-c945-41ad-899b-389c04d1efb2 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-e903b018-c945-41ad-899b-389c04d1efb2 off the node kali-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-e903b018-c945-41ad-899b-389c04d1efb2
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:55:01.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-327" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:16.337 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":166,"skipped":2798,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:55:01.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  4 11:55:05.222: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:55:05.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6472" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":2840,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:55:05.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  4 11:55:05.370: INFO: PodSpec: initContainers in spec.initContainers
May  4 11:55:59.476: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-4e79b7d7-3060-499a-9ec2-2e9c3b86a91a", GenerateName:"", Namespace:"init-container-7409", SelfLink:"/api/v1/namespaces/init-container-7409/pods/pod-init-4e79b7d7-3060-499a-9ec2-2e9c3b86a91a", UID:"78f9116d-2806-4886-b124-8f21787434ac", ResourceVersion:"1432842", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63724190105, loc:(*time.Location)(0x7b200c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"370772535"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003ce3220), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003ce3240)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc003ce3260), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc003ce3280)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6zvvk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0049dbb80), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6zvvk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", 
Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6zvvk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6zvvk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002b89cb8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0024110a0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b89d50)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002b89d70)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002b89d78), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002b89d7c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190105, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190105, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190105, loc:(*time.Location)(0x7b200c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190105, loc:(*time.Location)(0x7b200c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.15", PodIP:"10.244.2.193", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.193"}}, StartTime:(*v1.Time)(0xc003ce32a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc003ce32e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024111f0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://14dd36cdbed35e59df93dc8b436fabe2b0fce89146627f807e0aafc8ae847121", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003ce3300), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc003ce32c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc002b89e0f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:55:59.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7409" for this suite.

• [SLOW TEST:54.410 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":168,"skipped":2865,"failed":0}
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:55:59.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
May  4 11:55:59.851: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:56:13.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-693" for this suite.

• [SLOW TEST:13.743 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2865,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:56:13.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
May  4 11:56:13.509: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config api-versions'
May  4 11:56:13.770: INFO: stderr: ""
May  4 11:56:13.770: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:56:13.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9518" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":170,"skipped":2881,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:56:13.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
May  4 11:56:13.898: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5469'
May  4 11:56:16.785: INFO: stderr: ""
May  4 11:56:16.785: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May  4 11:56:16.785: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5469'
May  4 11:56:16.880: INFO: stderr: ""
May  4 11:56:16.880: INFO: stdout: "update-demo-nautilus-cnvkd update-demo-nautilus-k4bsn "
May  4 11:56:16.880: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cnvkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5469'
May  4 11:56:16.995: INFO: stderr: ""
May  4 11:56:16.995: INFO: stdout: ""
May  4 11:56:16.995: INFO: update-demo-nautilus-cnvkd is created but not running
May  4 11:56:21.995: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5469'
May  4 11:56:22.090: INFO: stderr: ""
May  4 11:56:22.090: INFO: stdout: "update-demo-nautilus-cnvkd update-demo-nautilus-k4bsn "
May  4 11:56:22.090: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cnvkd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5469'
May  4 11:56:22.190: INFO: stderr: ""
May  4 11:56:22.190: INFO: stdout: "true"
May  4 11:56:22.190: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cnvkd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5469'
May  4 11:56:22.292: INFO: stderr: ""
May  4 11:56:22.292: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  4 11:56:22.292: INFO: validating pod update-demo-nautilus-cnvkd
May  4 11:56:22.296: INFO: got data: {
  "image": "nautilus.jpg"
}

May  4 11:56:22.297: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  4 11:56:22.297: INFO: update-demo-nautilus-cnvkd is verified up and running
May  4 11:56:22.297: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4bsn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5469'
May  4 11:56:22.390: INFO: stderr: ""
May  4 11:56:22.390: INFO: stdout: "true"
May  4 11:56:22.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4bsn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5469'
May  4 11:56:22.486: INFO: stderr: ""
May  4 11:56:22.487: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May  4 11:56:22.487: INFO: validating pod update-demo-nautilus-k4bsn
May  4 11:56:22.490: INFO: got data: {
  "image": "nautilus.jpg"
}

May  4 11:56:22.491: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May  4 11:56:22.491: INFO: update-demo-nautilus-k4bsn is verified up and running
STEP: using delete to clean up resources
May  4 11:56:22.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5469'
May  4 11:56:22.600: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May  4 11:56:22.600: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May  4 11:56:22.600: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5469'
May  4 11:56:22.726: INFO: stderr: "No resources found in kubectl-5469 namespace.\n"
May  4 11:56:22.726: INFO: stdout: ""
May  4 11:56:22.726: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5469 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  4 11:56:22.836: INFO: stderr: ""
May  4 11:56:22.836: INFO: stdout: "update-demo-nautilus-cnvkd\nupdate-demo-nautilus-k4bsn\n"
May  4 11:56:23.336: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5469'
May  4 11:56:23.427: INFO: stderr: "No resources found in kubectl-5469 namespace.\n"
May  4 11:56:23.427: INFO: stdout: ""
May  4 11:56:23.427: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5469 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May  4 11:56:23.525: INFO: stderr: ""
May  4 11:56:23.525: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:56:23.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5469" for this suite.

• [SLOW TEST:9.753 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":171,"skipped":2900,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:56:23.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 11:56:23.912: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7373'
May  4 11:56:24.401: INFO: stderr: ""
May  4 11:56:24.401: INFO: stdout: "replicationcontroller/agnhost-master created\n"
May  4 11:56:24.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7373'
May  4 11:56:24.707: INFO: stderr: ""
May  4 11:56:24.707: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
May  4 11:56:25.711: INFO: Selector matched 1 pods for map[app:agnhost]
May  4 11:56:25.711: INFO: Found 0 / 1
May  4 11:56:26.711: INFO: Selector matched 1 pods for map[app:agnhost]
May  4 11:56:26.711: INFO: Found 0 / 1
May  4 11:56:27.711: INFO: Selector matched 1 pods for map[app:agnhost]
May  4 11:56:27.711: INFO: Found 0 / 1
May  4 11:56:28.719: INFO: Selector matched 1 pods for map[app:agnhost]
May  4 11:56:28.719: INFO: Found 1 / 1
May  4 11:56:28.719: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
May  4 11:56:28.750: INFO: Selector matched 1 pods for map[app:agnhost]
May  4 11:56:28.750: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
May  4 11:56:28.750: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe pod agnhost-master-knm84 --namespace=kubectl-7373'
May  4 11:56:28.884: INFO: stderr: ""
May  4 11:56:28.884: INFO: stdout: "Name:         agnhost-master-knm84\nNamespace:    kubectl-7373\nPriority:     0\nNode:         kali-worker2/172.17.0.18\nStart Time:   Mon, 04 May 2020 11:56:24 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.244.1.130\nIPs:\n  IP:           10.244.1.130\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   containerd://40d70876c0208ef39e332dc32a353e6fe732337d9986d6428a7967ec8ed88322\n    Image:          us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Image ID:       us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 04 May 2020 11:56:27 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-b4w4q (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-b4w4q:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-b4w4q\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                   Message\n  ----    ------     ----  ----                   -------\n  Normal  Scheduled  4s    default-scheduler      Successfully assigned kubectl-7373/agnhost-master-knm84 to kali-worker2\n  Normal  Pulled     3s    kubelet, kali-worker2  Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n  Normal  Created    2s    kubelet, kali-worker2  Created container agnhost-master\n  Normal  Started    1s    kubelet, kali-worker2  Started container agnhost-master\n"
May  4 11:56:28.885: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-7373'
May  4 11:56:29.009: INFO: stderr: ""
May  4 11:56:29.009: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-7373\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  5s    replication-controller  Created pod: agnhost-master-knm84\n"
May  4 11:56:29.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-7373'
May  4 11:56:29.111: INFO: stderr: ""
May  4 11:56:29.111: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-7373\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.107.112.203\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.244.1.130:6379\nSession Affinity:  None\nEvents:            \n"
May  4 11:56:29.115: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe node kali-control-plane'
May  4 11:56:29.249: INFO: stderr: ""
May  4 11:56:29.249: INFO: stdout: "Name:               kali-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=kali-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Wed, 29 Apr 2020 09:30:59 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nLease:\n  HolderIdentity:  kali-control-plane\n  AcquireTime:     \n  RenewTime:       Mon, 04 May 2020 11:56:19 +0000\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 04 May 2020 11:55:34 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 04 May 2020 11:55:34 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 04 May 2020 11:55:34 +0000   Wed, 29 Apr 2020 09:30:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 04 May 2020 11:55:34 +0000   Wed, 29 Apr 2020 09:31:34 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.17.0.19\n  Hostname:    kali-control-plane\nCapacity:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nAllocatable:\n  cpu:                16\n  ephemeral-storage:  2303189964Ki\n  hugepages-1Gi:      0\n  hugepages-2Mi:      0\n  memory:             131759892Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 2146cf85bed648199604ab2e0e9ac609\n  System UUID:                e83c0db4-babe-44fc-9dad-b5eeae6d23fd\n  Boot ID:                    ca2aa731-f890-4956-92a1-ff8c7560d571\n  Kernel Version:             4.15.0-88-generic\n  OS Image:                   Ubuntu 19.10\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  containerd://1.3.3-14-g449e9269\n  Kubelet Version:            v1.18.2\n  Kube-Proxy Version:         v1.18.2\nPodCIDR:                      10.244.0.0/24\nPodCIDRs:                     10.244.0.0/24\nNon-terminated Pods:          (9 in total)\n  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---\n  kube-system                 coredns-66bff467f8-rvq2k                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     5d2h\n  kube-system                 coredns-66bff467f8-w6zxd                      100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     5d2h\n  kube-system                 etcd-kali-control-plane                      
 0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d2h\n  kube-system                 kindnet-65djz                                 100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      5d2h\n  kube-system                 kube-apiserver-kali-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         5d2h\n  kube-system                 kube-controller-manager-kali-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         5d2h\n  kube-system                 kube-proxy-pnhtq                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d2h\n  kube-system                 kube-scheduler-kali-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         5d2h\n  local-path-storage          local-path-provisioner-bd4bb6b75-6l9ph        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5d2h\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\n  hugepages-1Gi      0 (0%)      0 (0%)\n  hugepages-2Mi      0 (0%)      0 (0%)\nEvents:              \n"
May  4 11:56:29.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config describe namespace kubectl-7373'
May  4 11:56:29.353: INFO: stderr: ""
May  4 11:56:29.353: INFO: stdout: "Name:         kubectl-7373\nLabels:       e2e-framework=kubectl\n              e2e-run=c3d571a7-3318-49f9-9e98-d2363c01e166\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:56:29.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7373" for this suite.

• [SLOW TEST:5.827 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:978
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":275,"completed":172,"skipped":2903,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:56:29.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 11:56:29.487: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May  4 11:56:29.505: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:29.518: INFO: Number of nodes with available pods: 0
May  4 11:56:29.518: INFO: Node kali-worker is running more than one daemon pod
May  4 11:56:30.523: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:30.527: INFO: Number of nodes with available pods: 0
May  4 11:56:30.527: INFO: Node kali-worker is running more than one daemon pod
May  4 11:56:31.523: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:31.527: INFO: Number of nodes with available pods: 0
May  4 11:56:31.527: INFO: Node kali-worker is running more than one daemon pod
May  4 11:56:32.522: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:32.526: INFO: Number of nodes with available pods: 0
May  4 11:56:32.526: INFO: Node kali-worker is running more than one daemon pod
May  4 11:56:33.523: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:33.527: INFO: Number of nodes with available pods: 1
May  4 11:56:33.527: INFO: Node kali-worker is running more than one daemon pod
May  4 11:56:34.529: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:34.532: INFO: Number of nodes with available pods: 2
May  4 11:56:34.532: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May  4 11:56:34.778: INFO: Wrong image for pod: daemon-set-gjzsx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:34.778: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:34.976: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:35.987: INFO: Wrong image for pod: daemon-set-gjzsx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:35.987: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:35.992: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:36.981: INFO: Wrong image for pod: daemon-set-gjzsx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:36.981: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:36.985: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:37.981: INFO: Wrong image for pod: daemon-set-gjzsx. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:37.981: INFO: Pod daemon-set-gjzsx is not available
May  4 11:56:37.981: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:37.984: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:38.981: INFO: Pod daemon-set-ktntv is not available
May  4 11:56:38.981: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:38.986: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:40.005: INFO: Pod daemon-set-ktntv is not available
May  4 11:56:40.005: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:40.009: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:40.986: INFO: Pod daemon-set-ktntv is not available
May  4 11:56:40.986: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:40.990: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:41.980: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:41.984: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:42.981: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:42.981: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:42.985: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:43.981: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:43.981: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:43.985: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:44.980: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:44.981: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:44.984: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:45.992: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:45.992: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:45.996: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:46.980: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:46.980: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:46.983: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:47.986: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:47.986: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:47.990: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:48.986: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:48.986: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:48.989: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:49.992: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:49.992: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:49.997: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:50.981: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:50.981: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:50.986: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:51.982: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:51.982: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:51.986: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:52.982: INFO: Wrong image for pod: daemon-set-xcqbh. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
May  4 11:56:52.982: INFO: Pod daemon-set-xcqbh is not available
May  4 11:56:52.986: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:53.981: INFO: Pod daemon-set-6cfhh is not available
May  4 11:56:53.986: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
May  4 11:56:53.990: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:53.994: INFO: Number of nodes with available pods: 1
May  4 11:56:53.994: INFO: Node kali-worker is running more than one daemon pod
May  4 11:56:54.999: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:55.003: INFO: Number of nodes with available pods: 1
May  4 11:56:55.003: INFO: Node kali-worker is running more than one daemon pod
May  4 11:56:55.999: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:56.003: INFO: Number of nodes with available pods: 1
May  4 11:56:56.003: INFO: Node kali-worker is running more than one daemon pod
May  4 11:56:56.999: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 11:56:57.001: INFO: Number of nodes with available pods: 2
May  4 11:56:57.001: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4261, will wait for the garbage collector to delete the pods
May  4 11:56:57.079: INFO: Deleting DaemonSet.extensions daemon-set took: 12.227404ms
May  4 11:56:57.380: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.319091ms
May  4 11:57:03.793: INFO: Number of nodes with available pods: 0
May  4 11:57:03.793: INFO: Number of running nodes: 0, number of available pods: 0
May  4 11:57:03.795: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4261/daemonsets","resourceVersion":"1433264"},"items":null}

May  4 11:57:03.797: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4261/pods","resourceVersion":"1433264"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:57:03.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4261" for this suite.

• [SLOW TEST:34.453 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":173,"skipped":2928,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:57:03.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-e9e07a3c-f432-4d5d-aeb5-4722ad6da92e
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-e9e07a3c-f432-4d5d-aeb5-4722ad6da92e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:58:20.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1563" for this suite.

• [SLOW TEST:76.617 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2957,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:58:20.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-c4dc1661-58ec-4f54-838c-de70e14fca9b
STEP: Creating a pod to test consume secrets
May  4 11:58:20.555: INFO: Waiting up to 5m0s for pod "pod-secrets-daf444e4-01d6-4ff6-9503-5cf8db7c8340" in namespace "secrets-4235" to be "Succeeded or Failed"
May  4 11:58:20.566: INFO: Pod "pod-secrets-daf444e4-01d6-4ff6-9503-5cf8db7c8340": Phase="Pending", Reason="", readiness=false. Elapsed: 11.325955ms
May  4 11:58:22.712: INFO: Pod "pod-secrets-daf444e4-01d6-4ff6-9503-5cf8db7c8340": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156821295s
May  4 11:58:24.715: INFO: Pod "pod-secrets-daf444e4-01d6-4ff6-9503-5cf8db7c8340": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160188679s
STEP: Saw pod success
May  4 11:58:24.715: INFO: Pod "pod-secrets-daf444e4-01d6-4ff6-9503-5cf8db7c8340" satisfied condition "Succeeded or Failed"
May  4 11:58:24.718: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-daf444e4-01d6-4ff6-9503-5cf8db7c8340 container secret-volume-test: 
STEP: delete the pod
May  4 11:58:24.814: INFO: Waiting for pod pod-secrets-daf444e4-01d6-4ff6-9503-5cf8db7c8340 to disappear
May  4 11:58:24.824: INFO: Pod pod-secrets-daf444e4-01d6-4ff6-9503-5cf8db7c8340 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:58:24.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4235" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":175,"skipped":2972,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:58:24.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:58:36.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6633" for this suite.

• [SLOW TEST:11.231 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":176,"skipped":3005,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:58:36.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6269
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-6269
May  4 11:58:36.190: INFO: Found 0 stateful pods, waiting for 1
May  4 11:58:46.195: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  4 11:58:46.268: INFO: Deleting all statefulset in ns statefulset-6269
May  4 11:58:46.272: INFO: Scaling statefulset ss to 0
May  4 11:59:06.343: INFO: Waiting for statefulset status.replicas updated to 0
May  4 11:59:06.346: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:59:06.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6269" for this suite.

• [SLOW TEST:30.299 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":177,"skipped":3083,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:59:06.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 11:59:06.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May  4 11:59:09.370: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3893 create -f -'
May  4 11:59:12.944: INFO: stderr: ""
May  4 11:59:12.944: INFO: stdout: "e2e-test-crd-publish-openapi-5295-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May  4 11:59:12.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3893 delete e2e-test-crd-publish-openapi-5295-crds test-cr'
May  4 11:59:13.064: INFO: stderr: ""
May  4 11:59:13.064: INFO: stdout: "e2e-test-crd-publish-openapi-5295-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
May  4 11:59:13.064: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3893 apply -f -'
May  4 11:59:13.346: INFO: stderr: ""
May  4 11:59:13.346: INFO: stdout: "e2e-test-crd-publish-openapi-5295-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
May  4 11:59:13.347: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3893 delete e2e-test-crd-publish-openapi-5295-crds test-cr'
May  4 11:59:13.474: INFO: stderr: ""
May  4 11:59:13.474: INFO: stdout: "e2e-test-crd-publish-openapi-5295-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
May  4 11:59:13.474: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5295-crds'
May  4 11:59:13.730: INFO: stderr: ""
May  4 11:59:13.730: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5295-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:59:16.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3893" for this suite.

• [SLOW TEST:10.300 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":178,"skipped":3092,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:59:16.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3348
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3348
I0504 11:59:16.880761       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-3348, replica count: 2
I0504 11:59:19.931342       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0504 11:59:22.931662       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
May  4 11:59:22.931: INFO: Creating new exec pod
May  4 11:59:27.970: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3348 execpodswlvq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
May  4 11:59:28.176: INFO: stderr: "I0504 11:59:28.105108    3198 log.go:172] (0xc0007cdef0) (0xc0005275e0) Create stream\nI0504 11:59:28.105314    3198 log.go:172] (0xc0007cdef0) (0xc0005275e0) Stream added, broadcasting: 1\nI0504 11:59:28.107875    3198 log.go:172] (0xc0007cdef0) Reply frame received for 1\nI0504 11:59:28.107930    3198 log.go:172] (0xc0007cdef0) (0xc0007e0000) Create stream\nI0504 11:59:28.107949    3198 log.go:172] (0xc0007cdef0) (0xc0007e0000) Stream added, broadcasting: 3\nI0504 11:59:28.108903    3198 log.go:172] (0xc0007cdef0) Reply frame received for 3\nI0504 11:59:28.108986    3198 log.go:172] (0xc0007cdef0) (0xc00043e000) Create stream\nI0504 11:59:28.109026    3198 log.go:172] (0xc0007cdef0) (0xc00043e000) Stream added, broadcasting: 5\nI0504 11:59:28.110193    3198 log.go:172] (0xc0007cdef0) Reply frame received for 5\nI0504 11:59:28.169100    3198 log.go:172] (0xc0007cdef0) Data frame received for 3\nI0504 11:59:28.169364    3198 log.go:172] (0xc0007e0000) (3) Data frame handling\nI0504 11:59:28.169558    3198 log.go:172] (0xc0007cdef0) Data frame received for 5\nI0504 11:59:28.169583    3198 log.go:172] (0xc00043e000) (5) Data frame handling\nI0504 11:59:28.169632    3198 log.go:172] (0xc00043e000) (5) Data frame sent\nI0504 11:59:28.169650    3198 log.go:172] (0xc0007cdef0) Data frame received for 5\nI0504 11:59:28.169662    3198 log.go:172] (0xc00043e000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0504 11:59:28.171467    3198 log.go:172] (0xc0007cdef0) Data frame received for 1\nI0504 11:59:28.171496    3198 log.go:172] (0xc0005275e0) (1) Data frame handling\nI0504 11:59:28.171512    3198 log.go:172] (0xc0005275e0) (1) Data frame sent\nI0504 11:59:28.171529    3198 log.go:172] (0xc0007cdef0) (0xc0005275e0) Stream removed, broadcasting: 1\nI0504 11:59:28.171559    3198 log.go:172] (0xc0007cdef0) Go away received\nI0504 11:59:28.171923    3198 log.go:172] (0xc0007cdef0) (0xc0005275e0) Stream removed, broadcasting: 1\nI0504 11:59:28.171940    3198 log.go:172] (0xc0007cdef0) (0xc0007e0000) Stream removed, broadcasting: 3\nI0504 11:59:28.171950    3198 log.go:172] (0xc0007cdef0) (0xc00043e000) Stream removed, broadcasting: 5\n"
May  4 11:59:28.176: INFO: stdout: ""
May  4 11:59:28.177: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3348 execpodswlvq -- /bin/sh -x -c nc -zv -t -w 2 10.111.52.120 80'
May  4 11:59:28.402: INFO: stderr: "I0504 11:59:28.328746    3218 log.go:172] (0xc000978000) (0xc000ab2000) Create stream\nI0504 11:59:28.328857    3218 log.go:172] (0xc000978000) (0xc000ab2000) Stream added, broadcasting: 1\nI0504 11:59:28.341880    3218 log.go:172] (0xc000978000) Reply frame received for 1\nI0504 11:59:28.341928    3218 log.go:172] (0xc000978000) (0xc000ab20a0) Create stream\nI0504 11:59:28.341940    3218 log.go:172] (0xc000978000) (0xc000ab20a0) Stream added, broadcasting: 3\nI0504 11:59:28.342888    3218 log.go:172] (0xc000978000) Reply frame received for 3\nI0504 11:59:28.342922    3218 log.go:172] (0xc000978000) (0xc00063f2c0) Create stream\nI0504 11:59:28.342932    3218 log.go:172] (0xc000978000) (0xc00063f2c0) Stream added, broadcasting: 5\nI0504 11:59:28.343683    3218 log.go:172] (0xc000978000) Reply frame received for 5\nI0504 11:59:28.394978    3218 log.go:172] (0xc000978000) Data frame received for 3\nI0504 11:59:28.395006    3218 log.go:172] (0xc000ab20a0) (3) Data frame handling\nI0504 11:59:28.395112    3218 log.go:172] (0xc000978000) Data frame received for 5\nI0504 11:59:28.395132    3218 log.go:172] (0xc00063f2c0) (5) Data frame handling\nI0504 11:59:28.395147    3218 log.go:172] (0xc00063f2c0) (5) Data frame sent\nI0504 11:59:28.395154    3218 log.go:172] (0xc000978000) Data frame received for 5\nI0504 11:59:28.395162    3218 log.go:172] (0xc00063f2c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.52.120 80\nConnection to 10.111.52.120 80 port [tcp/http] succeeded!\nI0504 11:59:28.396616    3218 log.go:172] (0xc000978000) Data frame received for 1\nI0504 11:59:28.396655    3218 log.go:172] (0xc000ab2000) (1) Data frame handling\nI0504 11:59:28.396687    3218 log.go:172] (0xc000ab2000) (1) Data frame sent\nI0504 11:59:28.396706    3218 log.go:172] (0xc000978000) (0xc000ab2000) Stream removed, broadcasting: 1\nI0504 11:59:28.396748    3218 log.go:172] (0xc000978000) Go away received\nI0504 11:59:28.397331    3218 log.go:172] (0xc000978000) (0xc000ab2000) Stream removed, broadcasting: 1\nI0504 11:59:28.397352    3218 log.go:172] (0xc000978000) (0xc000ab20a0) Stream removed, broadcasting: 3\nI0504 11:59:28.397363    3218 log.go:172] (0xc000978000) (0xc00063f2c0) Stream removed, broadcasting: 5\n"
May  4 11:59:28.402: INFO: stdout: ""
May  4 11:59:28.402: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3348 execpodswlvq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.15 30361'
May  4 11:59:28.620: INFO: stderr: "I0504 11:59:28.541088    3238 log.go:172] (0xc000ae8000) (0xc000a66000) Create stream\nI0504 11:59:28.541348    3238 log.go:172] (0xc000ae8000) (0xc000a66000) Stream added, broadcasting: 1\nI0504 11:59:28.548549    3238 log.go:172] (0xc000ae8000) Reply frame received for 1\nI0504 11:59:28.548621    3238 log.go:172] (0xc000ae8000) (0xc0008032c0) Create stream\nI0504 11:59:28.548644    3238 log.go:172] (0xc000ae8000) (0xc0008032c0) Stream added, broadcasting: 3\nI0504 11:59:28.550030    3238 log.go:172] (0xc000ae8000) Reply frame received for 3\nI0504 11:59:28.550084    3238 log.go:172] (0xc000ae8000) (0xc000a660a0) Create stream\nI0504 11:59:28.550102    3238 log.go:172] (0xc000ae8000) (0xc000a660a0) Stream added, broadcasting: 5\nI0504 11:59:28.550973    3238 log.go:172] (0xc000ae8000) Reply frame received for 5\nI0504 11:59:28.615088    3238 log.go:172] (0xc000ae8000) Data frame received for 3\nI0504 11:59:28.615116    3238 log.go:172] (0xc0008032c0) (3) Data frame handling\nI0504 11:59:28.615153    3238 log.go:172] (0xc000ae8000) Data frame received for 5\nI0504 11:59:28.615184    3238 log.go:172] (0xc000a660a0) (5) Data frame handling\nI0504 11:59:28.615219    3238 log.go:172] (0xc000a660a0) (5) Data frame sent\nI0504 11:59:28.615247    3238 log.go:172] (0xc000ae8000) Data frame received for 5\nI0504 11:59:28.615257    3238 log.go:172] (0xc000a660a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.15 30361\nConnection to 172.17.0.15 30361 port [tcp/30361] succeeded!\nI0504 11:59:28.616232    3238 log.go:172] (0xc000ae8000) Data frame received for 1\nI0504 11:59:28.616248    3238 log.go:172] (0xc000a66000) (1) Data frame handling\nI0504 11:59:28.616259    3238 log.go:172] (0xc000a66000) (1) Data frame sent\nI0504 11:59:28.616310    3238 log.go:172] (0xc000ae8000) (0xc000a66000) Stream removed, broadcasting: 1\nI0504 11:59:28.616452    3238 log.go:172] (0xc000ae8000) Go away received\nI0504 11:59:28.616618    3238 log.go:172] (0xc000ae8000) (0xc000a66000) Stream removed, broadcasting: 1\nI0504 11:59:28.616639    3238 log.go:172] (0xc000ae8000) (0xc0008032c0) Stream removed, broadcasting: 3\nI0504 11:59:28.616654    3238 log.go:172] (0xc000ae8000) (0xc000a660a0) Stream removed, broadcasting: 5\n"
May  4 11:59:28.620: INFO: stdout: ""
May  4 11:59:28.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=services-3348 execpodswlvq -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.18 30361'
May  4 11:59:28.822: INFO: stderr: "I0504 11:59:28.751577    3260 log.go:172] (0xc0009662c0) (0xc00077e140) Create stream\nI0504 11:59:28.751653    3260 log.go:172] (0xc0009662c0) (0xc00077e140) Stream added, broadcasting: 1\nI0504 11:59:28.754310    3260 log.go:172] (0xc0009662c0) Reply frame received for 1\nI0504 11:59:28.754353    3260 log.go:172] (0xc0009662c0) (0xc0006a72c0) Create stream\nI0504 11:59:28.754368    3260 log.go:172] (0xc0009662c0) (0xc0006a72c0) Stream added, broadcasting: 3\nI0504 11:59:28.755284    3260 log.go:172] (0xc0009662c0) Reply frame received for 3\nI0504 11:59:28.755346    3260 log.go:172] (0xc0009662c0) (0xc00077e320) Create stream\nI0504 11:59:28.755377    3260 log.go:172] (0xc0009662c0) (0xc00077e320) Stream added, broadcasting: 5\nI0504 11:59:28.756257    3260 log.go:172] (0xc0009662c0) Reply frame received for 5\nI0504 11:59:28.814806    3260 log.go:172] (0xc0009662c0) Data frame received for 3\nI0504 11:59:28.814843    3260 log.go:172] (0xc0006a72c0) (3) Data frame handling\nI0504 11:59:28.815002    3260 log.go:172] (0xc0009662c0) Data frame received for 5\nI0504 11:59:28.815028    3260 log.go:172] (0xc00077e320) (5) Data frame handling\nI0504 11:59:28.815059    3260 log.go:172] (0xc00077e320) (5) Data frame sent\nI0504 11:59:28.815083    3260 log.go:172] (0xc0009662c0) Data frame received for 5\nI0504 11:59:28.815105    3260 log.go:172] (0xc00077e320) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.18 30361\nConnection to 172.17.0.18 30361 port [tcp/30361] succeeded!\nI0504 11:59:28.816908    3260 log.go:172] (0xc0009662c0) Data frame received for 1\nI0504 11:59:28.816938    3260 log.go:172] (0xc00077e140) (1) Data frame handling\nI0504 11:59:28.816976    3260 log.go:172] (0xc00077e140) (1) Data frame sent\nI0504 11:59:28.816998    3260 log.go:172] (0xc0009662c0) (0xc00077e140) Stream removed, broadcasting: 1\nI0504 11:59:28.817032    3260 log.go:172] (0xc0009662c0) Go away received\nI0504 11:59:28.817652    3260 log.go:172] (0xc0009662c0) (0xc00077e140) Stream removed, broadcasting: 1\nI0504 11:59:28.817675    3260 log.go:172] (0xc0009662c0) (0xc0006a72c0) Stream removed, broadcasting: 3\nI0504 11:59:28.817693    3260 log.go:172] (0xc0009662c0) (0xc00077e320) Stream removed, broadcasting: 5\n"
May  4 11:59:28.822: INFO: stdout: ""
May  4 11:59:28.822: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:59:28.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3348" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.228 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":179,"skipped":3114,"failed":0}
SS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:59:28.891: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 11:59:28.960: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May  4 11:59:29.006: INFO: Number of nodes with available pods: 0
May  4 11:59:29.006: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May  4 11:59:29.069: INFO: Number of nodes with available pods: 0
May  4 11:59:29.069: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:30.161: INFO: Number of nodes with available pods: 0
May  4 11:59:30.161: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:31.073: INFO: Number of nodes with available pods: 0
May  4 11:59:31.073: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:32.073: INFO: Number of nodes with available pods: 0
May  4 11:59:32.073: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:33.073: INFO: Number of nodes with available pods: 1
May  4 11:59:33.073: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May  4 11:59:33.137: INFO: Number of nodes with available pods: 1
May  4 11:59:33.137: INFO: Number of running nodes: 0, number of available pods: 1
May  4 11:59:34.141: INFO: Number of nodes with available pods: 0
May  4 11:59:34.141: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May  4 11:59:34.197: INFO: Number of nodes with available pods: 0
May  4 11:59:34.197: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:35.202: INFO: Number of nodes with available pods: 0
May  4 11:59:35.202: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:36.202: INFO: Number of nodes with available pods: 0
May  4 11:59:36.203: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:37.212: INFO: Number of nodes with available pods: 0
May  4 11:59:37.212: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:38.202: INFO: Number of nodes with available pods: 0
May  4 11:59:38.202: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:39.201: INFO: Number of nodes with available pods: 0
May  4 11:59:39.201: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:40.202: INFO: Number of nodes with available pods: 0
May  4 11:59:40.202: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:41.202: INFO: Number of nodes with available pods: 0
May  4 11:59:41.202: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:42.202: INFO: Number of nodes with available pods: 0
May  4 11:59:42.202: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:43.201: INFO: Number of nodes with available pods: 0
May  4 11:59:43.202: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:44.202: INFO: Number of nodes with available pods: 0
May  4 11:59:44.202: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:45.299: INFO: Number of nodes with available pods: 0
May  4 11:59:45.299: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:46.202: INFO: Number of nodes with available pods: 0
May  4 11:59:46.202: INFO: Node kali-worker2 is running more than one daemon pod
May  4 11:59:47.209: INFO: Number of nodes with available pods: 1
May  4 11:59:47.209: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2073, will wait for the garbage collector to delete the pods
May  4 11:59:47.280: INFO: Deleting DaemonSet.extensions daemon-set took: 6.96929ms
May  4 11:59:47.580: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.291687ms
May  4 11:59:53.483: INFO: Number of nodes with available pods: 0
May  4 11:59:53.483: INFO: Number of running nodes: 0, number of available pods: 0
May  4 11:59:53.486: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2073/daemonsets","resourceVersion":"1434098"},"items":null}

May  4 11:59:53.488: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2073/pods","resourceVersion":"1434098"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:59:53.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2073" for this suite.

• [SLOW TEST:24.637 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":180,"skipped":3116,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:59:53.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-13b356c5-b3df-4276-8831-4c13c5e0f474
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-13b356c5-b3df-4276-8831-4c13c5e0f474
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 11:59:59.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2839" for this suite.

• [SLOW TEST:6.266 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3134,"failed":0}
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 11:59:59.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
May  4 11:59:59.891: INFO: Waiting up to 5m0s for pod "client-containers-458d462a-b8df-4b6b-95a7-20c6f545f4aa" in namespace "containers-6268" to be "Succeeded or Failed"
May  4 11:59:59.906: INFO: Pod "client-containers-458d462a-b8df-4b6b-95a7-20c6f545f4aa": Phase="Pending", Reason="", readiness=false. Elapsed: 14.434607ms
May  4 12:00:01.909: INFO: Pod "client-containers-458d462a-b8df-4b6b-95a7-20c6f545f4aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01719908s
May  4 12:00:03.912: INFO: Pod "client-containers-458d462a-b8df-4b6b-95a7-20c6f545f4aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020840294s
STEP: Saw pod success
May  4 12:00:03.912: INFO: Pod "client-containers-458d462a-b8df-4b6b-95a7-20c6f545f4aa" satisfied condition "Succeeded or Failed"
May  4 12:00:03.915: INFO: Trying to get logs from node kali-worker pod client-containers-458d462a-b8df-4b6b-95a7-20c6f545f4aa container test-container: 
STEP: delete the pod
May  4 12:00:03.952: INFO: Waiting for pod client-containers-458d462a-b8df-4b6b-95a7-20c6f545f4aa to disappear
May  4 12:00:03.960: INFO: Pod client-containers-458d462a-b8df-4b6b-95a7-20c6f545f4aa no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:00:03.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6268" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":182,"skipped":3134,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:00:03.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-2099
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-2099
STEP: Deleting pre-stop pod
May  4 12:00:17.177: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:00:17.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-2099" for this suite.

• [SLOW TEST:13.251 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":275,"completed":183,"skipped":3207,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:00:17.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
May  4 12:00:17.302: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix004938846/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:00:17.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3897" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":184,"skipped":3211,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:00:17.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 12:00:18.463: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 12:00:20.528: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190418, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190418, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190418, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190418, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  4 12:00:22.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190418, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190418, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190418, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724190418, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 12:00:25.559: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:00:25.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6107" for this suite.
STEP: Destroying namespace "webhook-6107-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.048 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":185,"skipped":3212,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:00:25.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May  4 12:00:30.314: INFO: Successfully updated pod "pod-update-b03ce0cd-e452-40cf-8a06-cf94184942aa"
STEP: verifying the updated pod is in kubernetes
May  4 12:00:30.328: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:00:30.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4018" for this suite.
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3225,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:00:30.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:00:30.551: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"bf384855-4818-4eed-8e96-30d917cc4391", Controller:(*bool)(0xc005579142), BlockOwnerDeletion:(*bool)(0xc005579143)}}
May  4 12:00:30.571: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"a3f49131-5384-4b81-97d2-ef7dc25870f2", Controller:(*bool)(0xc0055388f2), BlockOwnerDeletion:(*bool)(0xc0055388f3)}}
May  4 12:00:30.610: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"1625574e-61ad-4b44-9047-f9e391d95845", Controller:(*bool)(0xc005538ada), BlockOwnerDeletion:(*bool)(0xc005538adb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:00:35.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9872" for this suite.

• [SLOW TEST:5.360 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":187,"skipped":3263,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:00:35.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:00:36.011: INFO: Creating ReplicaSet my-hostname-basic-dccd87a2-e832-4cc3-a865-ed47209e384c
May  4 12:00:36.089: INFO: Pod name my-hostname-basic-dccd87a2-e832-4cc3-a865-ed47209e384c: Found 0 pods out of 1
May  4 12:00:41.119: INFO: Pod name my-hostname-basic-dccd87a2-e832-4cc3-a865-ed47209e384c: Found 1 pods out of 1
May  4 12:00:41.119: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-dccd87a2-e832-4cc3-a865-ed47209e384c" is running
May  4 12:00:41.122: INFO: Pod "my-hostname-basic-dccd87a2-e832-4cc3-a865-ed47209e384c-4sj75" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 12:00:36 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 12:00:39 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 12:00:39 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-04 12:00:36 +0000 UTC Reason: Message:}])
May  4 12:00:41.123: INFO: Trying to dial the pod
May  4 12:00:46.143: INFO: Controller my-hostname-basic-dccd87a2-e832-4cc3-a865-ed47209e384c: Got expected result from replica 1 [my-hostname-basic-dccd87a2-e832-4cc3-a865-ed47209e384c-4sj75]: "my-hostname-basic-dccd87a2-e832-4cc3-a865-ed47209e384c-4sj75", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:00:46.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-605" for this suite.

• [SLOW TEST:10.451 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":188,"skipped":3300,"failed":0}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:00:46.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May  4 12:00:49.331: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:00:49.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3399" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":189,"skipped":3304,"failed":0}
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:00:49.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-6e553aab-05ff-440d-bc4a-b0c2a9368f26
STEP: Creating a pod to test consume configMaps
May  4 12:00:49.619: INFO: Waiting up to 5m0s for pod "pod-configmaps-7ab178ac-5270-41d7-b0a5-b538d14ab62c" in namespace "configmap-7284" to be "Succeeded or Failed"
May  4 12:00:49.655: INFO: Pod "pod-configmaps-7ab178ac-5270-41d7-b0a5-b538d14ab62c": Phase="Pending", Reason="", readiness=false. Elapsed: 35.552375ms
May  4 12:00:51.698: INFO: Pod "pod-configmaps-7ab178ac-5270-41d7-b0a5-b538d14ab62c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079091902s
May  4 12:00:53.704: INFO: Pod "pod-configmaps-7ab178ac-5270-41d7-b0a5-b538d14ab62c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084941761s
STEP: Saw pod success
May  4 12:00:53.704: INFO: Pod "pod-configmaps-7ab178ac-5270-41d7-b0a5-b538d14ab62c" satisfied condition "Succeeded or Failed"
May  4 12:00:53.728: INFO: Trying to get logs from node kali-worker pod pod-configmaps-7ab178ac-5270-41d7-b0a5-b538d14ab62c container configmap-volume-test: 
STEP: delete the pod
May  4 12:00:53.772: INFO: Waiting for pod pod-configmaps-7ab178ac-5270-41d7-b0a5-b538d14ab62c to disappear
May  4 12:00:53.781: INFO: Pod pod-configmaps-7ab178ac-5270-41d7-b0a5-b538d14ab62c no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:00:53.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7284" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3307,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:00:53.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:00:54.106: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-73be1ee3-d0e2-4e4e-bdae-1243d486a0ea" in namespace "security-context-test-1000" to be "Succeeded or Failed"
May  4 12:00:54.125: INFO: Pod "busybox-privileged-false-73be1ee3-d0e2-4e4e-bdae-1243d486a0ea": Phase="Pending", Reason="", readiness=false. Elapsed: 18.718285ms
May  4 12:00:56.144: INFO: Pod "busybox-privileged-false-73be1ee3-d0e2-4e4e-bdae-1243d486a0ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037960174s
May  4 12:00:58.149: INFO: Pod "busybox-privileged-false-73be1ee3-d0e2-4e4e-bdae-1243d486a0ea": Phase="Running", Reason="", readiness=true. Elapsed: 4.042169413s
May  4 12:01:00.151: INFO: Pod "busybox-privileged-false-73be1ee3-d0e2-4e4e-bdae-1243d486a0ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.044975393s
May  4 12:01:00.151: INFO: Pod "busybox-privileged-false-73be1ee3-d0e2-4e4e-bdae-1243d486a0ea" satisfied condition "Succeeded or Failed"
May  4 12:01:00.167: INFO: Got logs for pod "busybox-privileged-false-73be1ee3-d0e2-4e4e-bdae-1243d486a0ea": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:01:00.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1000" for this suite.

• [SLOW TEST:6.384 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":191,"skipped":3336,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:01:00.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0504 12:01:40.770420       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  4 12:01:40.770: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:01:40.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1137" for this suite.

• [SLOW TEST:40.604 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":192,"skipped":3353,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:01:40.780: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-53b2fcb5-01bb-4507-b972-6eff22fe1b26
STEP: Creating a pod to test consume secrets
May  4 12:01:40.904: INFO: Waiting up to 5m0s for pod "pod-secrets-276da244-e966-4a8e-9604-6aa0db0d107d" in namespace "secrets-7338" to be "Succeeded or Failed"
May  4 12:01:40.908: INFO: Pod "pod-secrets-276da244-e966-4a8e-9604-6aa0db0d107d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.214295ms
May  4 12:01:42.939: INFO: Pod "pod-secrets-276da244-e966-4a8e-9604-6aa0db0d107d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034296246s
May  4 12:01:44.942: INFO: Pod "pod-secrets-276da244-e966-4a8e-9604-6aa0db0d107d": Phase="Running", Reason="", readiness=true. Elapsed: 4.037489702s
May  4 12:01:47.006: INFO: Pod "pod-secrets-276da244-e966-4a8e-9604-6aa0db0d107d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.101514358s
STEP: Saw pod success
May  4 12:01:47.006: INFO: Pod "pod-secrets-276da244-e966-4a8e-9604-6aa0db0d107d" satisfied condition "Succeeded or Failed"
May  4 12:01:47.023: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-276da244-e966-4a8e-9604-6aa0db0d107d container secret-volume-test: 
STEP: delete the pod
May  4 12:01:47.516: INFO: Waiting for pod pod-secrets-276da244-e966-4a8e-9604-6aa0db0d107d to disappear
May  4 12:01:47.560: INFO: Pod pod-secrets-276da244-e966-4a8e-9604-6aa0db0d107d no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:01:47.560: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7338" for this suite.

• [SLOW TEST:6.813 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3358,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:01:47.593: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9989.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-9989.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9989.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-9989.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-9989.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9989.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  4 12:01:56.328: INFO: DNS probes using dns-9989/dns-test-e7570e1c-2230-4e6f-9254-e2a54c6c707f succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:01:56.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9989" for this suite.

• [SLOW TEST:8.916 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":194,"skipped":3372,"failed":0}
SS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:01:56.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:01:56.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:02:00.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5580" for this suite.
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":195,"skipped":3374,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:02:00.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-8963
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8963 to expose endpoints map[]
May  4 12:02:01.072: INFO: Get endpoints failed (8.804563ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
May  4 12:02:02.075: INFO: successfully validated that service multi-endpoint-test in namespace services-8963 exposes endpoints map[] (1.012672984s elapsed)
STEP: Creating pod pod1 in namespace services-8963
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8963 to expose endpoints map[pod1:[100]]
May  4 12:02:05.220: INFO: successfully validated that service multi-endpoint-test in namespace services-8963 exposes endpoints map[pod1:[100]] (3.137461258s elapsed)
STEP: Creating pod pod2 in namespace services-8963
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8963 to expose endpoints map[pod1:[100] pod2:[101]]
May  4 12:02:08.368: INFO: successfully validated that service multi-endpoint-test in namespace services-8963 exposes endpoints map[pod1:[100] pod2:[101]] (3.142448103s elapsed)
STEP: Deleting pod pod1 in namespace services-8963
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8963 to expose endpoints map[pod2:[101]]
May  4 12:02:09.420: INFO: successfully validated that service multi-endpoint-test in namespace services-8963 exposes endpoints map[pod2:[101]] (1.046916668s elapsed)
STEP: Deleting pod pod2 in namespace services-8963
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8963 to expose endpoints map[]
May  4 12:02:10.450: INFO: successfully validated that service multi-endpoint-test in namespace services-8963 exposes endpoints map[] (1.025290158s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:02:10.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8963" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:9.683 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":196,"skipped":3387,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:02:10.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
May  4 12:02:10.833: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-a 5a879e96-913b-4081-ad8d-6dddd0a06484 1435189 0 2020-05-04 12:02:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:02:10.833: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-a 5a879e96-913b-4081-ad8d-6dddd0a06484 1435189 0 2020-05-04 12:02:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
May  4 12:02:20.857: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-a 5a879e96-913b-4081-ad8d-6dddd0a06484 1435237 0 2020-05-04 12:02:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:02:20.857: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-a 5a879e96-913b-4081-ad8d-6dddd0a06484 1435237 0 2020-05-04 12:02:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
May  4 12:02:30.867: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-a 5a879e96-913b-4081-ad8d-6dddd0a06484 1435270 0 2020-05-04 12:02:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:02:30.867: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-a 5a879e96-913b-4081-ad8d-6dddd0a06484 1435270 0 2020-05-04 12:02:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
May  4 12:02:40.949: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-a 5a879e96-913b-4081-ad8d-6dddd0a06484 1435303 0 2020-05-04 12:02:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:02:40.949: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-a 5a879e96-913b-4081-ad8d-6dddd0a06484 1435303 0 2020-05-04 12:02:10 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:30 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
May  4 12:02:50.957: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-b 9d549576-ccb6-4a34-a0ce-7eef303caa83 1435335 0 2020-05-04 12:02:50 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:02:50.957: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-b 9d549576-ccb6-4a34-a0ce-7eef303caa83 1435335 0 2020-05-04 12:02:50 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
May  4 12:03:00.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-b 9d549576-ccb6-4a34-a0ce-7eef303caa83 1435365 0 2020-05-04 12:02:50 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:03:00.964: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-5653 /api/v1/namespaces/watch-5653/configmaps/e2e-watch-test-configmap-b 9d549576-ccb6-4a34-a0ce-7eef303caa83 1435365 0 2020-05-04 12:02:50 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-05-04 12:02:50 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:03:10.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5653" for this suite.

• [SLOW TEST:60.378 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":197,"skipped":3388,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:03:10.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  4 12:03:11.058: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e1c81fa-62e8-4c8c-8c60-376a0e23653d" in namespace "downward-api-8384" to be "Succeeded or Failed"
May  4 12:03:11.096: INFO: Pod "downwardapi-volume-0e1c81fa-62e8-4c8c-8c60-376a0e23653d": Phase="Pending", Reason="", readiness=false. Elapsed: 37.766245ms
May  4 12:03:13.100: INFO: Pod "downwardapi-volume-0e1c81fa-62e8-4c8c-8c60-376a0e23653d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042044238s
May  4 12:03:15.105: INFO: Pod "downwardapi-volume-0e1c81fa-62e8-4c8c-8c60-376a0e23653d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046844084s
STEP: Saw pod success
May  4 12:03:15.105: INFO: Pod "downwardapi-volume-0e1c81fa-62e8-4c8c-8c60-376a0e23653d" satisfied condition "Succeeded or Failed"
May  4 12:03:15.109: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-0e1c81fa-62e8-4c8c-8c60-376a0e23653d container client-container: 
STEP: delete the pod
May  4 12:03:15.145: INFO: Waiting for pod downwardapi-volume-0e1c81fa-62e8-4c8c-8c60-376a0e23653d to disappear
May  4 12:03:15.158: INFO: Pod downwardapi-volume-0e1c81fa-62e8-4c8c-8c60-376a0e23653d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:03:15.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8384" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":198,"skipped":3403,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:03:15.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
May  4 12:03:15.248: INFO: >>> kubeConfig: /root/.kube/config
May  4 12:03:17.191: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:03:27.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3796" for this suite.

• [SLOW TEST:12.706 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":199,"skipped":3408,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:03:27.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May  4 12:03:35.994: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  4 12:03:35.997: INFO: Pod pod-with-prestop-http-hook still exists
May  4 12:03:37.997: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  4 12:03:38.002: INFO: Pod pod-with-prestop-http-hook still exists
May  4 12:03:39.997: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  4 12:03:40.001: INFO: Pod pod-with-prestop-http-hook still exists
May  4 12:03:41.997: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  4 12:03:42.007: INFO: Pod pod-with-prestop-http-hook still exists
May  4 12:03:43.997: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May  4 12:03:44.001: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:03:44.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7906" for this suite.

• [SLOW TEST:16.154 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3440,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:03:44.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288
STEP: creating a pod

May  4 12:03:44.074: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-8923 -- logs-generator --log-lines-total 100 --run-duration 20s'
May  4 12:03:44.191: INFO: stderr: ""
May  4 12:03:44.191: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Waiting for log generator to start.
May  4 12:03:44.191: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
May  4 12:03:44.191: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8923" to be "running and ready, or succeeded"
May  4 12:03:44.213: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 22.298038ms
May  4 12:03:46.276: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085278757s
May  4 12:03:48.281: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.08971356s
May  4 12:03:48.281: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
May  4 12:03:48.281: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
May  4 12:03:48.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8923'
May  4 12:03:48.406: INFO: stderr: ""
May  4 12:03:48.406: INFO: stdout: "I0504 12:03:46.542941       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/552t 490\nI0504 12:03:46.743108       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/8w9 544\nI0504 12:03:46.943127       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/b6g 365\nI0504 12:03:47.143184       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/qvc 223\nI0504 12:03:47.343118       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/6h5w 241\nI0504 12:03:47.543104       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/2vk 283\nI0504 12:03:47.743139       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/ztks 388\nI0504 12:03:47.943162       1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/lzm 323\nI0504 12:03:48.143183       1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/d2k2 244\nI0504 12:03:48.343115       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/ljf 536\n"
STEP: limiting log lines
May  4 12:03:48.406: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8923 --tail=1'
May  4 12:03:48.513: INFO: stderr: ""
May  4 12:03:48.513: INFO: stdout: "I0504 12:03:48.343115       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/ljf 536\n"
May  4 12:03:48.513: INFO: got output "I0504 12:03:48.343115       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/ljf 536\n"
STEP: limiting log bytes
May  4 12:03:48.513: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8923 --limit-bytes=1'
May  4 12:03:48.625: INFO: stderr: ""
May  4 12:03:48.625: INFO: stdout: "I"
May  4 12:03:48.625: INFO: got output "I"
STEP: exposing timestamps
May  4 12:03:48.626: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8923 --tail=1 --timestamps'
May  4 12:03:48.735: INFO: stderr: ""
May  4 12:03:48.735: INFO: stdout: "2020-05-04T12:03:48.543331057Z I0504 12:03:48.543143       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/7crk 378\n"
May  4 12:03:48.735: INFO: got output "2020-05-04T12:03:48.543331057Z I0504 12:03:48.543143       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/7crk 378\n"
STEP: restricting to a time range
May  4 12:03:51.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8923 --since=1s'
May  4 12:03:51.348: INFO: stderr: ""
May  4 12:03:51.348: INFO: stdout: "I0504 12:03:50.543175       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/h87 294\nI0504 12:03:50.743082       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/66p 540\nI0504 12:03:50.943140       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/xw2w 573\nI0504 12:03:51.143127       1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/hsh 244\nI0504 12:03:51.343135       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/jn6z 522\n"
May  4 12:03:51.348: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8923 --since=24h'
May  4 12:03:51.468: INFO: stderr: ""
May  4 12:03:51.468: INFO: stdout: "I0504 12:03:46.542941       1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/552t 490\nI0504 12:03:46.743108       1 logs_generator.go:76] 1 PUT /api/v1/namespaces/ns/pods/8w9 544\nI0504 12:03:46.943127       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/b6g 365\nI0504 12:03:47.143184       1 logs_generator.go:76] 3 POST /api/v1/namespaces/ns/pods/qvc 223\nI0504 12:03:47.343118       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/6h5w 241\nI0504 12:03:47.543104       1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/2vk 283\nI0504 12:03:47.743139       1 logs_generator.go:76] 6 PUT /api/v1/namespaces/default/pods/ztks 388\nI0504 12:03:47.943162       1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/lzm 323\nI0504 12:03:48.143183       1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/d2k2 244\nI0504 12:03:48.343115       1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/ljf 536\nI0504 12:03:48.543143       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/default/pods/7crk 378\nI0504 12:03:48.743170       1 logs_generator.go:76] 11 PUT /api/v1/namespaces/ns/pods/j9c 429\nI0504 12:03:48.943110       1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/vsz 225\nI0504 12:03:49.143171       1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/zxx 402\nI0504 12:03:49.343125       1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/6jzx 568\nI0504 12:03:49.543135       1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/5v8 537\nI0504 12:03:49.743175       1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/hvzr 210\nI0504 12:03:49.943147       1 logs_generator.go:76] 17 POST /api/v1/namespaces/ns/pods/zhl 551\nI0504 12:03:50.143192       1 logs_generator.go:76] 18 POST /api/v1/namespaces/default/pods/8b8 243\nI0504 12:03:50.343082       1 logs_generator.go:76] 19 POST /api/v1/namespaces/kube-system/pods/8c7f 211\nI0504 12:03:50.543175       1 logs_generator.go:76] 20 GET /api/v1/namespaces/kube-system/pods/h87 294\nI0504 12:03:50.743082       1 logs_generator.go:76] 21 GET /api/v1/namespaces/default/pods/66p 540\nI0504 12:03:50.943140       1 logs_generator.go:76] 22 GET /api/v1/namespaces/ns/pods/xw2w 573\nI0504 12:03:51.143127       1 logs_generator.go:76] 23 GET /api/v1/namespaces/ns/pods/hsh 244\nI0504 12:03:51.343135       1 logs_generator.go:76] 24 GET /api/v1/namespaces/kube-system/pods/jn6z 522\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294
May  4 12:03:51.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8923'
May  4 12:03:54.297: INFO: stderr: ""
May  4 12:03:54.297: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:03:54.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8923" for this suite.

• [SLOW TEST:10.276 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":275,"completed":201,"skipped":3462,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:03:54.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:03:54.381: INFO: Creating deployment "webserver-deployment"
May  4 12:03:54.386: INFO: Waiting for observed generation 1
May  4 12:03:56.594: INFO: Waiting for all required pods to come up
May  4 12:03:56.599: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
May  4 12:04:06.783: INFO: Waiting for deployment "webserver-deployment" to complete
May  4 12:04:06.788: INFO: Updating deployment "webserver-deployment" with a non-existent image
May  4 12:04:06.793: INFO: Updating deployment webserver-deployment
May  4 12:04:06.793: INFO: Waiting for observed generation 2
May  4 12:04:08.848: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May  4 12:04:08.850: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May  4 12:04:08.852: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May  4 12:04:08.859: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May  4 12:04:08.859: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May  4 12:04:08.861: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
May  4 12:04:08.865: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
May  4 12:04:08.865: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
May  4 12:04:08.872: INFO: Updating deployment webserver-deployment
May  4 12:04:08.872: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
May  4 12:04:09.602: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May  4 12:04:09.903: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
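The 20/13 split verified above is the proportional-scaling behavior: before the scale-up the two ReplicaSets sit at 8 and 5 replicas, and scaling the deployment from 10 to 30 with maxSurge=3 allows at most 33 pods, so the controller divides that headroom roughly in proportion to current sizes (8/13 of 33 is about 20, 5/13 is about 13). The tiny calculation below only illustrates that arithmetic with the figures from this log; it is not the deployment controller's exact rounding logic:

package main

import "fmt"

func main() {
	// Figures from the log: ReplicaSets at 8 and 5 replicas,
	// deployment scaled from 10 to 30 with maxSurge=3.
	oldRS, newRS := 8, 5
	total := oldRS + newRS // 13
	maxTotal := 30 + 3     // replicas + maxSurge = 33

	// Rough proportional split; the controller's rounding rules differ slightly.
	oldScaled := oldRS * maxTotal / total // 20
	newScaled := maxTotal - oldScaled     // 13
	fmt.Println(oldScaled, newScaled)     // 20 13
}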
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
May  4 12:04:12.533: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-8378 /apis/apps/v1/namespaces/deployment-8378/deployments/webserver-deployment 55f715e9-ec78-487b-ae82-408a1386d52d 1435930 3 2020-05-04 12:03:54 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-05-04 12:04:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 
105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005abe708  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-04 12:04:09 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-05-04 12:04:10 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

May  4 12:04:12.676: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-8378 /apis/apps/v1/namespaces/deployment-8378/replicasets/webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 1435926 3 2020-05-04 12:04:06 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 55f715e9-ec78-487b-ae82-408a1386d52d 0xc005abeb97 0xc005abeb98}] []  [{kube-controller-manager Update apps/v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 53 102 55 49 53 101 57 45 101 99 55 56 45 52 56 55 98 45 97 101 56 50 45 52 48 56 97 49 51 56 54 100 53 50 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 
125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005abec18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
May  4 12:04:12.676: INFO: All old ReplicaSets of Deployment "webserver-deployment":
May  4 12:04:12.676: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-8378 /apis/apps/v1/namespaces/deployment-8378/replicasets/webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 1435912 3 2020-05-04 12:03:54 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 55f715e9-ec78-487b-ae82-408a1386d52d 0xc005abeca7 0xc005abeca8}] []  [{kube-controller-manager Update apps/v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 53 102 55 49 53 101 57 45 101 99 55 56 45 52 56 55 98 45 97 101 56 50 45 52 48 56 97 49 51 56 54 100 53 50 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005abed18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
May  4 12:04:12.706: INFO: Pod "webserver-deployment-6676bcd6d4-2xdh2" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2xdh2 webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-2xdh2 3025930f-0665-4f1b-a1e0-972b9b36bfbf 1435986 0 2020-05-04 12:04:06 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac4d87 0xc005ac4d88}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,To
lerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.159,StartTime:2020-05-04 12:04:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.706: INFO: Pod "webserver-deployment-6676bcd6d4-2xh6t" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-2xh6t webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-2xh6t c4eaeccc-22d1-4ade-87be-dce566b7c1cb 1435837 0 2020-05-04 12:04:06 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac4f77 0xc005ac4f78}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.707: INFO: Pod "webserver-deployment-6676bcd6d4-9dktx" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-9dktx webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-9dktx fbf38828-08c1-477d-bf2e-5089cbed75ab 1435954 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac5127 0xc005ac5128}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtim
eClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
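The `FieldsV1{Raw:*[...]}` blocks in the dumps above are each pod's managedFields entries printed as decimal UTF-8 byte values; decoded, they are small JSON documents of the form `{"f:metadata":{"f:generateName":{},...}}`. A minimal sketch for turning such a dump back into readable JSON follows — the `raw` slice is a hypothetical, shortened excerpt of one array from the log, not the full entry:

```go
package main

import "fmt"

// decodeFieldsV1 converts the decimal byte values printed in a
// FieldsV1{Raw:*[...]} dump back into the JSON string they encode.
func decodeFieldsV1(raw []byte) string {
	return string(raw)
}

func main() {
	// Hypothetical excerpt: the opening bytes of one managedFields entry,
	// truncated and closed early for brevity.
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58,
		123, 34, 102, 58, 103, 101, 110, 101, 114, 97, 116, 101, 78, 97, 109, 101,
		34, 58, 123, 125, 125, 125}
	fmt.Println(decodeFieldsV1(raw)) // prints: {"f:metadata":{"f:generateName":{}}}
}
```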
May  4 12:04:12.707: INFO: Pod "webserver-deployment-6676bcd6d4-hjxhk" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hjxhk webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-hjxhk 6f2aaa29-09a0-4c5e-bfc7-9894311486b3 1435971 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac52d7 0xc005ac52d8}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.708: INFO: Pod "webserver-deployment-6676bcd6d4-j6pkg" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-j6pkg webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-j6pkg 50f48cc6-231f-4317-9814-ad19b7b8472e 1435915 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac5487 0xc005ac5488}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtim
eClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.709: INFO: Pod "webserver-deployment-6676bcd6d4-mdtsv" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-mdtsv webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-mdtsv 32504cd1-5bc7-45be-82f2-f85fe90d94a3 1435845 0 2020-05-04 12:04:07 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac5637 0xc005ac5638}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:07 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.709: INFO: Pod "webserver-deployment-6676bcd6d4-nlsqj" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nlsqj webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-nlsqj 315db45b-f4bc-4597-a4ff-6142efb99e2b 1435980 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac57e7 0xc005ac57e8}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtim
eClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.710: INFO: Pod "webserver-deployment-6676bcd6d4-nxkzf" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-nxkzf webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-nxkzf cad9e1d3-acb1-41ae-ad95-65ec09fe8f49 1435929 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac5997 0xc005ac5998}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.710: INFO: Pod "webserver-deployment-6676bcd6d4-q4z2r" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-q4z2r webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-q4z2r 73505496-aa76-44e9-8f23-aecf86753429 1435981 0 2020-05-04 12:04:06 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac5b47 0xc005ac5b48}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 50 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Tol
erationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.222,StartTime:2020-05-04 12:04:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.222,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
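Note on reading the dumps above: the FieldsV1{Raw:*[123 34 102 58 ...]} blocks are the pods' managed-fields JSON printed by Go's default formatter as decimal ASCII byte values (123 = '{', 34 = '"', and so on). The following is a small illustrative Go sketch, not part of the e2e suite, for turning such a dump back into readable JSON; the helper name decodeRawBytes is hypothetical.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeRawBytes converts a space-separated list of decimal byte values,
// as printed for FieldsV1 Raw above, back into the JSON string it encodes.
func decodeRawBytes(dump string) (string, error) {
	var sb strings.Builder
	for _, tok := range strings.Fields(dump) {
		n, err := strconv.Atoi(tok)
		if err != nil {
			return "", err
		}
		sb.WriteByte(byte(n))
	}
	return sb.String(), nil
}

func main() {
	// The common prefix of the dumps above decodes to `{"f:metadata":{`.
	s, err := decodeRawBytes("123 34 102 58 109 101 116 97 100 97 116 97 34 58 123")
	if err != nil {
		panic(err)
	}
	fmt.Println(s)
}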
May  4 12:04:12.711: INFO: Pod "webserver-deployment-6676bcd6d4-qmcmq" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qmcmq webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-qmcmq 80b49a25-45be-48d1-a17e-c0b4943929de 1435834 0 2020-05-04 12:04:06 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac5d67 0xc005ac5d68}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtim
eClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:06 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
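The unavailable pods above are Pending with container waiting reasons of ErrImagePull (the deliberately unresolvable webserver:404 image) or ContainerCreating. A minimal client-go sketch along these lines, assuming the same kubeconfig path, the deployment-8378 namespace, and the name=httpd label shown in this run, could be used to surface those waiting reasons directly; it is an illustrative sketch, not code from the test suite.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the kubeconfig used by this e2e run.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// List the deployment's pods and print each container's waiting reason,
	// e.g. ErrImagePull or ContainerCreating.
	pods, err := clientset.CoreV1().Pods("deployment-8378").List(context.TODO(),
		metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if cs.State.Waiting != nil {
				fmt.Printf("%s/%s: %s\n", pod.Name, cs.Name, cs.State.Waiting.Reason)
			}
		}
	}
}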
May  4 12:04:12.711: INFO: Pod "webserver-deployment-6676bcd6d4-tt6h7" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tt6h7 webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-tt6h7 98b5fb49-b8ca-4295-bfc5-bdeba9fc0d4e 1435956 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005ac5f57 0xc005ac5f58}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.712: INFO: Pod "webserver-deployment-6676bcd6d4-whgw6" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-whgw6 webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-whgw6 b4adf18c-ce2f-48db-8939-d7eed21b25aa 1435939 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005f461a7 0xc005f461a8}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.712: INFO: Pod "webserver-deployment-6676bcd6d4-x5rsc" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-x5rsc webserver-deployment-6676bcd6d4- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-6676bcd6d4-x5rsc 6f0a87b2-e94e-4d9f-989f-8de8c54a2e54 1435964 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 7538a9c8-b5b1-4a74-87eb-3649b4d58841 0xc005f463a7 0xc005f463a8}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 53 51 56 97 57 99 56 45 98 53 98 49 45 52 97 55 52 45 56 55 101 98 45 51 54 52 57 98 52 100 53 56 56 52 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 
58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runtime
ClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.713: INFO: Pod "webserver-deployment-84855cf797-242s5" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-242s5 webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-242s5 649b8600-53ca-41d1-8cf9-0c2688e247b0 1435788 0 2020-05-04 12:03:54 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f465a7 0xc005f465a8}] []  [{kube-controller-manager Update v1 2020-05-04 12:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 50 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},
HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.220,StartTime:2020-05-04 12:03:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 12:04:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8b53fae5bbee962c75f38e0844ba7054913f5b9779fdaa7cb3f29a37601b5516,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.220,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.713: INFO: Pod "webserver-deployment-84855cf797-467n9" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-467n9 webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-467n9 51b749e7-bf7a-4e84-9651-b14beadfaf78 1435949 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f46807 0xc005f46808}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.714: INFO: Pod "webserver-deployment-84855cf797-6hg87" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-6hg87 webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-6hg87 edd9ce5b-22a2-41ec-82ee-86c3e8480c1d 1435780 0 2020-05-04 12:03:54 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f469d7 0xc005f469d8}] []  [{kube-controller-manager Update v1 2020-05-04 12:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.158,StartTime:2020-05-04 12:03:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 12:04:05 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://185b4e41e2e860746c5eb9f4bde651fbbe86e5b4fa12edd4781522061310cecd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.714: INFO: Pod "webserver-deployment-84855cf797-8sqzr" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-8sqzr webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-8sqzr 730a4443-d7cd-4632-a411-c76d0ee61dac 1435979 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f46be7 0xc005f46be8}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:12 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.715: INFO: Pod "webserver-deployment-84855cf797-97fw8" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-97fw8 webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-97fw8 e794a777-4a1a-4084-964c-4e6aca0db403 1435771 0 2020-05-04 12:03:54 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f46da7 0xc005f46da8}] []  [{kube-controller-manager Update v1 2020-05-04 12:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.157,StartTime:2020-05-04 12:03:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 12:04:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a1d37384eac58a33338b1d2104b4d29f60f6a8fc82c9d7fb1fff22a44fb09a15,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.715: INFO: Pod "webserver-deployment-84855cf797-9x54f" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-9x54f webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-9x54f e85076be-66cb-4e5b-8f1f-fe379214f9b6 1435974 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f46f57 0xc005f46f58}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.715: INFO: Pod "webserver-deployment-84855cf797-cgm94" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cgm94 webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-cgm94 a49acd0e-cd01-46a2-9040-ebb011ebd6ca 1435960 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f47187 0xc005f47188}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.716: INFO: Pod "webserver-deployment-84855cf797-ch6m7" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-ch6m7 webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-ch6m7 7f3dd0e7-9dfd-4503-a995-a5a89f997275 1435728 0 2020-05-04 12:03:54 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f47357 0xc005f47358}] []  [{kube-controller-manager Update v1 2020-05-04 12:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:01 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.154,StartTime:2020-05-04 12:03:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 12:03:58 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://b039bca6a09edfd2e18e5c789018fd4274bf8de71c989cf172de2673cfcfde17,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.154,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.716: INFO: Pod "webserver-deployment-84855cf797-h4h8s" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-h4h8s webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-h4h8s 47428ea9-c9b6-4217-b875-a8ebf32967af 1435948 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f47517 0xc005f47518}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
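The FieldsV1{Raw:*[123 34 ...]} runs in the dumps above are the pods' managedFields entries; Go's default formatter prints the []byte payload as decimal values, but the payload is ordinary JSON such as {"f:metadata":{"f:labels":{...}}}. A minimal sketch of decoding them, assuming only the apimachinery types already in play here (the helper name printManagedFields and the sample entry in main are illustrative, not part of the test):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // printManagedFields renders each managed-fields entry's Raw JSON as text,
    // which is exactly what the decimal byte runs in the log above encode.
    func printManagedFields(entries []metav1.ManagedFieldsEntry) {
        for _, e := range entries {
            if e.FieldsV1 == nil {
                continue
            }
            fmt.Printf("%s %s: %s\n", e.Manager, e.Operation, string(e.FieldsV1.Raw))
        }
    }

    func main() {
        // Hypothetical entry mirroring the kube-controller-manager update seen above.
        raw := []byte(`{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}}}`)
        printManagedFields([]metav1.ManagedFieldsEntry{{
            Manager:   "kube-controller-manager",
            Operation: metav1.ManagedFieldsOperationUpdate,
            FieldsV1:  &metav1.FieldsV1{Raw: raw},
        }})
    }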
May  4 12:04:12.716: INFO: Pod "webserver-deployment-84855cf797-h958s" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-h958s webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-h958s 6a1d3e83-7bb2-4506-a0ed-c0086b4325aa 1435920 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f476c7 0xc005f476c8}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.716: INFO: Pod "webserver-deployment-84855cf797-jqlpw" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-jqlpw webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-jqlpw 3af8aae1-e1c5-48ed-bfb8-c1952f00aa85 1435910 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f47887 0xc005f47888}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[
]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:,StartTime:2020-05-04 12:04:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.716: INFO: Pod "webserver-deployment-84855cf797-kcwx9" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-kcwx9 webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-kcwx9 1835c2bb-57f5-4388-8015-70ce647a8457 1435761 0 2020-05-04 12:03:54 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f47a67 0xc005f47a68}] []  [{kube-controller-manager Update v1 2020-05-04 12:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.156,StartTime:2020-05-04 12:03:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 12:04:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a556a59f6b8c008162fad31a0afad37e74a419b6f067ad17e03f87b0120bba7e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.156,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
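The "is available" / "is not available" verdicts above follow from the pods' Ready condition: webserver-deployment-84855cf797-kcwx9 is Running with Ready=True, while the Pending pods report Ready=False with reason ContainersNotReady. A minimal sketch of that rule, assuming a zero or small minReadySeconds on the Deployment; the helper isPodAvailable below is illustrative and not the e2e framework's own code:

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // isPodAvailable approximates the availability rule behind the log lines above:
    // the pod's Ready condition must be True and, when minReadySeconds > 0, must
    // have been True for at least that long.
    func isPodAvailable(pod *corev1.Pod, minReadySeconds int32, now metav1.Time) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type != corev1.PodReady {
                continue
            }
            if c.Status != corev1.ConditionTrue {
                return false
            }
            if minReadySeconds == 0 {
                return true
            }
            readyFor := now.Time.Sub(c.LastTransitionTime.Time)
            return readyFor >= time.Duration(minReadySeconds)*time.Second
        }
        return false
    }

    func main() {
        // A pod shaped like kcwx9 above: Running, Ready=True for ~30s.
        pod := &corev1.Pod{Status: corev1.PodStatus{
            Phase: corev1.PodRunning,
            Conditions: []corev1.PodCondition{{
                Type:               corev1.PodReady,
                Status:             corev1.ConditionTrue,
                LastTransitionTime: metav1.NewTime(time.Now().Add(-30 * time.Second)),
            }},
        }}
        fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // true
    }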
May  4 12:04:12.717: INFO: Pod "webserver-deployment-84855cf797-mnrzq" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mnrzq webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-mnrzq 8e093f48-09ba-4250-a90d-a6bfd4044dba 1435922 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f47c47 0xc005f47c48}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.717: INFO: Pod "webserver-deployment-84855cf797-pfgdl" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-pfgdl webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-pfgdl b9d96164-6965-4d7f-b247-4885a7d5f9b4 1435969 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f47e27 0xc005f47e28}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.717: INFO: Pod "webserver-deployment-84855cf797-swr7s" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-swr7s webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-swr7s 3b0595fc-2fd4-4ef6-ae2e-747c3617ae7e 1435940 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005f47ff7 0xc005f47ff8}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
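To reproduce this pod-by-pod summary outside the test, one could list the ReplicaSet's pods with client-go and inspect their Ready conditions. A rough sketch, assuming the kubeconfig path, the deployment-8378 namespace, and the name=httpd,pod-template-hash=84855cf797 labels shown in the dumps above (all taken from this log; adjust for another cluster):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path and selector are assumptions lifted from the log above.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        pods, err := client.CoreV1().Pods("deployment-8378").List(context.TODO(), metav1.ListOptions{
            LabelSelector: "name=httpd,pod-template-hash=84855cf797",
        })
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            ready := false
            for _, c := range p.Status.Conditions {
                if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                    ready = true
                }
            }
            fmt.Printf("%s ready=%v phase=%s\n", p.Name, ready, p.Status.Phase)
        }
    }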
May  4 12:04:12.719: INFO: Pod "webserver-deployment-84855cf797-szmzf" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-szmzf webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-szmzf 92831500-fa05-444a-b066-ee79be84598a 1435933 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc0054941b7 0xc0054941b8}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:10 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.719: INFO: Pod "webserver-deployment-84855cf797-tq9v5" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-tq9v5 webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-tq9v5 8f12c889-c866-4f06-880d-c0408b984e08 1435762 0 2020-05-04 12:03:54 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005494387 0xc005494388}] []  [{kube-controller-manager Update v1 2020-05-04 12:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 49 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},
HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.217,StartTime:2020-05-04 12:03:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 12:04:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://4009c5f3f50d959191860c33830034de26b2a3a57f1727ce5f358a2808bb9faf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.217,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.720: INFO: Pod "webserver-deployment-84855cf797-vb9zv" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vb9zv webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-vb9zv 7f714231-882e-406c-9ded-0146585bdea5 1435755 0 2020-05-04 12:03:54 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005494577 0xc005494578}] []  [{kube-controller-manager Update v1 2020-05-04 12:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:04 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 49 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},
HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.15,PodIP:10.244.2.218,StartTime:2020-05-04 12:03:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 12:04:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a6d81ebc03dd070e90c83f0ec675c34c7f108b7011a9864137568ea6a837249a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.218,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.720: INFO: Pod "webserver-deployment-84855cf797-wf6mj" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-wf6mj webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-wf6mj dbf07347-57f1-4ff0-901a-c81ed5b7d77d 1435754 0 2020-05-04 12:03:54 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc005494727 0xc005494728}] []  [{kube-controller-manager Update v1 2020-05-04 12:03:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:04 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},}
,HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:03:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:10.244.1.155,StartTime:2020-05-04 12:03:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-04 12:04:02 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0fafca77565a08529ea183e34c50f0dae39eb804750e374e26275f7f927fc893,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.155,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
May  4 12:04:12.720: INFO: Pod "webserver-deployment-84855cf797-xp6rm" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-xp6rm webserver-deployment-84855cf797- deployment-8378 /api/v1/namespaces/deployment-8378/pods/webserver-deployment-84855cf797-xp6rm 493568fe-53bb-47fb-8371-91832701d54c 1435973 0 2020-05-04 12:04:09 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 67521975-3b48-4fbc-945f-06a6d8896bd1 0xc0054948d7 0xc0054948d8}] []  [{kube-controller-manager Update v1 2020-05-04 12:04:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 55 53 50 49 57 55 53 45 51 98 52 56 45 52 102 98 99 45 57 52 53 102 45 48 54 97 54 100 56 56 57 54 98 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-05-04 12:04:11 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-477s8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-477s8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-477s8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:
[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-04 12:04:09 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.18,PodIP:,StartTime:2020-05-04 12:04:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:04:12.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8378" for this suite.

• [SLOW TEST:18.898 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":202,"skipped":3471,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:04:13.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:04:50.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-1396" for this suite.

• [SLOW TEST:37.631 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":203,"skipped":3511,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:04:50.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
May  4 12:04:50.989: INFO: Waiting up to 5m0s for pod "pod-f7637f49-ac3e-442c-93ca-2de0b9e41d99" in namespace "emptydir-7818" to be "Succeeded or Failed"
May  4 12:04:51.080: INFO: Pod "pod-f7637f49-ac3e-442c-93ca-2de0b9e41d99": Phase="Pending", Reason="", readiness=false. Elapsed: 90.088424ms
May  4 12:04:53.084: INFO: Pod "pod-f7637f49-ac3e-442c-93ca-2de0b9e41d99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094071243s
May  4 12:04:55.088: INFO: Pod "pod-f7637f49-ac3e-442c-93ca-2de0b9e41d99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.098702797s
STEP: Saw pod success
May  4 12:04:55.088: INFO: Pod "pod-f7637f49-ac3e-442c-93ca-2de0b9e41d99" satisfied condition "Succeeded or Failed"
May  4 12:04:55.091: INFO: Trying to get logs from node kali-worker pod pod-f7637f49-ac3e-442c-93ca-2de0b9e41d99 container test-container: 
STEP: delete the pod
May  4 12:04:55.206: INFO: Waiting for pod pod-f7637f49-ac3e-442c-93ca-2de0b9e41d99 to disappear
May  4 12:04:55.209: INFO: Pod pod-f7637f49-ac3e-442c-93ca-2de0b9e41d99 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:04:55.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7818" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3525,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:04:55.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-de629c5f-1dc2-4333-9d9f-7894eed25655
STEP: Creating secret with name s-test-opt-upd-c3efc6e2-f78d-4942-8c59-8e0444ada364
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-de629c5f-1dc2-4333-9d9f-7894eed25655
STEP: Updating secret s-test-opt-upd-c3efc6e2-f78d-4942-8c59-8e0444ada364
STEP: Creating secret with name s-test-opt-create-5bf61223-fc09-4961-bd85-6c59b3a13d1d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:05:03.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8498" for this suite.

• [SLOW TEST:8.248 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":205,"skipped":3545,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:05:03.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
May  4 12:05:03.561: INFO: Waiting up to 5m0s for pod "pod-32972b6c-cda2-4caf-ae24-f8356c5af167" in namespace "emptydir-3422" to be "Succeeded or Failed"
May  4 12:05:03.570: INFO: Pod "pod-32972b6c-cda2-4caf-ae24-f8356c5af167": Phase="Pending", Reason="", readiness=false. Elapsed: 8.870524ms
May  4 12:05:05.574: INFO: Pod "pod-32972b6c-cda2-4caf-ae24-f8356c5af167": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013212946s
May  4 12:05:07.579: INFO: Pod "pod-32972b6c-cda2-4caf-ae24-f8356c5af167": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017827453s
May  4 12:05:09.733: INFO: Pod "pod-32972b6c-cda2-4caf-ae24-f8356c5af167": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.17204447s
STEP: Saw pod success
May  4 12:05:09.733: INFO: Pod "pod-32972b6c-cda2-4caf-ae24-f8356c5af167" satisfied condition "Succeeded or Failed"
May  4 12:05:09.737: INFO: Trying to get logs from node kali-worker2 pod pod-32972b6c-cda2-4caf-ae24-f8356c5af167 container test-container: 
STEP: delete the pod
May  4 12:05:09.816: INFO: Waiting for pod pod-32972b6c-cda2-4caf-ae24-f8356c5af167 to disappear
May  4 12:05:09.870: INFO: Pod pod-32972b6c-cda2-4caf-ae24-f8356c5af167 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:05:09.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3422" for this suite.

• [SLOW TEST:6.404 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3552,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:05:09.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-af3ebd9b-5158-424f-8909-a52a28f1abe6
STEP: Creating a pod to test consume secrets
May  4 12:05:09.949: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2b239329-4849-4050-b023-19b3aec3f566" in namespace "projected-222" to be "Succeeded or Failed"
May  4 12:05:10.104: INFO: Pod "pod-projected-secrets-2b239329-4849-4050-b023-19b3aec3f566": Phase="Pending", Reason="", readiness=false. Elapsed: 154.994421ms
May  4 12:05:12.558: INFO: Pod "pod-projected-secrets-2b239329-4849-4050-b023-19b3aec3f566": Phase="Pending", Reason="", readiness=false. Elapsed: 2.608907832s
May  4 12:05:14.560: INFO: Pod "pod-projected-secrets-2b239329-4849-4050-b023-19b3aec3f566": Phase="Running", Reason="", readiness=true. Elapsed: 4.611461742s
May  4 12:05:16.564: INFO: Pod "pod-projected-secrets-2b239329-4849-4050-b023-19b3aec3f566": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.615549357s
STEP: Saw pod success
May  4 12:05:16.564: INFO: Pod "pod-projected-secrets-2b239329-4849-4050-b023-19b3aec3f566" satisfied condition "Succeeded or Failed"
May  4 12:05:16.568: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-2b239329-4849-4050-b023-19b3aec3f566 container projected-secret-volume-test: 
STEP: delete the pod
May  4 12:05:16.604: INFO: Waiting for pod pod-projected-secrets-2b239329-4849-4050-b023-19b3aec3f566 to disappear
May  4 12:05:16.667: INFO: Pod pod-projected-secrets-2b239329-4849-4050-b023-19b3aec3f566 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:05:16.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-222" for this suite.

• [SLOW TEST:6.795 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3584,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:05:16.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-833f084d-b9df-4825-a78c-fe47e5f1294d
STEP: Creating a pod to test consume secrets
May  4 12:05:16.841: INFO: Waiting up to 5m0s for pod "pod-secrets-b09c0545-b365-4374-b3cb-911cabeb56eb" in namespace "secrets-8665" to be "Succeeded or Failed"
May  4 12:05:16.844: INFO: Pod "pod-secrets-b09c0545-b365-4374-b3cb-911cabeb56eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.950201ms
May  4 12:05:18.848: INFO: Pod "pod-secrets-b09c0545-b365-4374-b3cb-911cabeb56eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006838986s
May  4 12:05:20.851: INFO: Pod "pod-secrets-b09c0545-b365-4374-b3cb-911cabeb56eb": Phase="Running", Reason="", readiness=true. Elapsed: 4.010262979s
May  4 12:05:22.855: INFO: Pod "pod-secrets-b09c0545-b365-4374-b3cb-911cabeb56eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01424884s
STEP: Saw pod success
May  4 12:05:22.855: INFO: Pod "pod-secrets-b09c0545-b365-4374-b3cb-911cabeb56eb" satisfied condition "Succeeded or Failed"
May  4 12:05:22.858: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-b09c0545-b365-4374-b3cb-911cabeb56eb container secret-volume-test: 
STEP: delete the pod
May  4 12:05:22.895: INFO: Waiting for pod pod-secrets-b09c0545-b365-4374-b3cb-911cabeb56eb to disappear
May  4 12:05:22.912: INFO: Pod pod-secrets-b09c0545-b365-4374-b3cb-911cabeb56eb no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:05:22.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8665" for this suite.

• [SLOW TEST:6.244 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3619,"failed":0}
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:05:22.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
May  4 12:05:23.004: INFO: Waiting up to 5m0s for pod "client-containers-08740837-b290-4ab3-86dc-d708ea127ef4" in namespace "containers-8079" to be "Succeeded or Failed"
May  4 12:05:23.008: INFO: Pod "client-containers-08740837-b290-4ab3-86dc-d708ea127ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.746076ms
May  4 12:05:25.012: INFO: Pod "client-containers-08740837-b290-4ab3-86dc-d708ea127ef4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007908569s
May  4 12:05:27.016: INFO: Pod "client-containers-08740837-b290-4ab3-86dc-d708ea127ef4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012283795s
STEP: Saw pod success
May  4 12:05:27.016: INFO: Pod "client-containers-08740837-b290-4ab3-86dc-d708ea127ef4" satisfied condition "Succeeded or Failed"
May  4 12:05:27.019: INFO: Trying to get logs from node kali-worker2 pod client-containers-08740837-b290-4ab3-86dc-d708ea127ef4 container test-container: 
STEP: delete the pod
May  4 12:05:27.039: INFO: Waiting for pod client-containers-08740837-b290-4ab3-86dc-d708ea127ef4 to disappear
May  4 12:05:27.043: INFO: Pod client-containers-08740837-b290-4ab3-86dc-d708ea127ef4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:05:27.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8079" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":209,"skipped":3622,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:05:27.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-0036a843-691a-42e8-b5fb-927e5ff27d68
STEP: Creating a pod to test consume secrets
May  4 12:05:27.200: INFO: Waiting up to 5m0s for pod "pod-secrets-754c786b-3bc0-4fe0-b5e5-ad293b7c762d" in namespace "secrets-9821" to be "Succeeded or Failed"
May  4 12:05:27.204: INFO: Pod "pod-secrets-754c786b-3bc0-4fe0-b5e5-ad293b7c762d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.144201ms
May  4 12:05:29.236: INFO: Pod "pod-secrets-754c786b-3bc0-4fe0-b5e5-ad293b7c762d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035195131s
May  4 12:05:31.271: INFO: Pod "pod-secrets-754c786b-3bc0-4fe0-b5e5-ad293b7c762d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070859785s
STEP: Saw pod success
May  4 12:05:31.271: INFO: Pod "pod-secrets-754c786b-3bc0-4fe0-b5e5-ad293b7c762d" satisfied condition "Succeeded or Failed"
May  4 12:05:31.274: INFO: Trying to get logs from node kali-worker pod pod-secrets-754c786b-3bc0-4fe0-b5e5-ad293b7c762d container secret-env-test: 
STEP: delete the pod
May  4 12:05:31.332: INFO: Waiting for pod pod-secrets-754c786b-3bc0-4fe0-b5e5-ad293b7c762d to disappear
May  4 12:05:31.356: INFO: Pod pod-secrets-754c786b-3bc0-4fe0-b5e5-ad293b7c762d no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:05:31.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9821" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3640,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:05:31.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-e4ddf6e4-5908-4ece-a6c8-d1dc99037036
STEP: Creating a pod to test consume configMaps
May  4 12:05:31.531: INFO: Waiting up to 5m0s for pod "pod-configmaps-41b27c3c-6c86-4916-bdb7-4e88072ad928" in namespace "configmap-1240" to be "Succeeded or Failed"
May  4 12:05:31.576: INFO: Pod "pod-configmaps-41b27c3c-6c86-4916-bdb7-4e88072ad928": Phase="Pending", Reason="", readiness=false. Elapsed: 44.305931ms
May  4 12:05:33.589: INFO: Pod "pod-configmaps-41b27c3c-6c86-4916-bdb7-4e88072ad928": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057778666s
May  4 12:05:35.592: INFO: Pod "pod-configmaps-41b27c3c-6c86-4916-bdb7-4e88072ad928": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061164316s
May  4 12:05:37.597: INFO: Pod "pod-configmaps-41b27c3c-6c86-4916-bdb7-4e88072ad928": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065620106s
STEP: Saw pod success
May  4 12:05:37.597: INFO: Pod "pod-configmaps-41b27c3c-6c86-4916-bdb7-4e88072ad928" satisfied condition "Succeeded or Failed"
May  4 12:05:37.600: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-41b27c3c-6c86-4916-bdb7-4e88072ad928 container configmap-volume-test: 
STEP: delete the pod
May  4 12:05:37.638: INFO: Waiting for pod pod-configmaps-41b27c3c-6c86-4916-bdb7-4e88072ad928 to disappear
May  4 12:05:37.696: INFO: Pod pod-configmaps-41b27c3c-6c86-4916-bdb7-4e88072ad928 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:05:37.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1240" for this suite.

• [SLOW TEST:6.401 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":211,"skipped":3667,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:05:37.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:06:37.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2179" for this suite.

• [SLOW TEST:60.145 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3685,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:06:37.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-ddee1388-8db8-4a69-bbfa-4143ae81d590
STEP: Creating a pod to test consume secrets
May  4 12:06:38.004: INFO: Waiting up to 5m0s for pod "pod-secrets-d7bf82a9-4715-454f-bc94-5b0f2a95fe3b" in namespace "secrets-9986" to be "Succeeded or Failed"
May  4 12:06:38.038: INFO: Pod "pod-secrets-d7bf82a9-4715-454f-bc94-5b0f2a95fe3b": Phase="Pending", Reason="", readiness=false. Elapsed: 34.605876ms
May  4 12:06:40.043: INFO: Pod "pod-secrets-d7bf82a9-4715-454f-bc94-5b0f2a95fe3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039181742s
May  4 12:06:42.056: INFO: Pod "pod-secrets-d7bf82a9-4715-454f-bc94-5b0f2a95fe3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052588228s
STEP: Saw pod success
May  4 12:06:42.056: INFO: Pod "pod-secrets-d7bf82a9-4715-454f-bc94-5b0f2a95fe3b" satisfied condition "Succeeded or Failed"
May  4 12:06:42.059: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-d7bf82a9-4715-454f-bc94-5b0f2a95fe3b container secret-volume-test: 
STEP: delete the pod
May  4 12:06:42.121: INFO: Waiting for pod pod-secrets-d7bf82a9-4715-454f-bc94-5b0f2a95fe3b to disappear
May  4 12:06:42.133: INFO: Pod pod-secrets-d7bf82a9-4715-454f-bc94-5b0f2a95fe3b no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:06:42.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9986" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3692,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:06:42.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5580
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May  4 12:06:42.265: INFO: Found 0 stateful pods, waiting for 3
May  4 12:06:52.272: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  4 12:06:52.272: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  4 12:06:52.272: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
May  4 12:07:02.271: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  4 12:07:02.271: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  4 12:07:02.271: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May  4 12:07:02.299: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
May  4 12:07:12.407: INFO: Updating stateful set ss2
May  4 12:07:12.415: INFO: Waiting for Pod statefulset-5580/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  4 12:07:22.424: INFO: Waiting for Pod statefulset-5580/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
May  4 12:07:32.967: INFO: Found 2 stateful pods, waiting for 3
May  4 12:07:42.973: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  4 12:07:42.973: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  4 12:07:42.973: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
May  4 12:07:42.999: INFO: Updating stateful set ss2
May  4 12:07:43.007: INFO: Waiting for Pod statefulset-5580/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  4 12:07:53.015: INFO: Waiting for Pod statefulset-5580/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  4 12:08:03.034: INFO: Updating stateful set ss2
May  4 12:08:03.108: INFO: Waiting for StatefulSet statefulset-5580/ss2 to complete update
May  4 12:08:03.108: INFO: Waiting for Pod statefulset-5580/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  4 12:08:13.116: INFO: Waiting for StatefulSet statefulset-5580/ss2 to complete update
May  4 12:08:13.116: INFO: Waiting for Pod statefulset-5580/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  4 12:08:23.117: INFO: Deleting all statefulset in ns statefulset-5580
May  4 12:08:23.119: INFO: Scaling statefulset ss2 to 0
May  4 12:08:53.157: INFO: Waiting for statefulset status.replicas updated to 0
May  4 12:08:53.160: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:08:53.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5580" for this suite.

• [SLOW TEST:131.041 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":214,"skipped":3696,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:08:53.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  4 12:08:53.260: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:08:59.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8682" for this suite.

• [SLOW TEST:6.296 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":215,"skipped":3706,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:08:59.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:08:59.591: INFO: Create a RollingUpdate DaemonSet
May  4 12:08:59.594: INFO: Check that daemon pods launch on every node of the cluster
May  4 12:08:59.616: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 12:08:59.676: INFO: Number of nodes with available pods: 0
May  4 12:08:59.676: INFO: Node kali-worker is running more than one daemon pod
May  4 12:09:00.682: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 12:09:00.685: INFO: Number of nodes with available pods: 0
May  4 12:09:00.685: INFO: Node kali-worker is running more than one daemon pod
May  4 12:09:01.682: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 12:09:01.686: INFO: Number of nodes with available pods: 0
May  4 12:09:01.686: INFO: Node kali-worker is running more than one daemon pod
May  4 12:09:02.682: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 12:09:02.685: INFO: Number of nodes with available pods: 0
May  4 12:09:02.685: INFO: Node kali-worker is running more than one daemon pod
May  4 12:09:03.681: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 12:09:03.708: INFO: Number of nodes with available pods: 1
May  4 12:09:03.708: INFO: Node kali-worker2 is running more than one daemon pod
May  4 12:09:04.702: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 12:09:04.739: INFO: Number of nodes with available pods: 2
May  4 12:09:04.739: INFO: Number of running nodes: 2, number of available pods: 2
May  4 12:09:04.739: INFO: Update the DaemonSet to trigger a rollout
May  4 12:09:04.761: INFO: Updating DaemonSet daemon-set
May  4 12:09:13.830: INFO: Roll back the DaemonSet before rollout is complete
May  4 12:09:13.836: INFO: Updating DaemonSet daemon-set
May  4 12:09:13.836: INFO: Make sure DaemonSet rollback is complete
May  4 12:09:13.844: INFO: Wrong image for pod: daemon-set-bmxns. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  4 12:09:13.844: INFO: Pod daemon-set-bmxns is not available
May  4 12:09:13.867: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 12:09:14.872: INFO: Wrong image for pod: daemon-set-bmxns. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
May  4 12:09:14.872: INFO: Pod daemon-set-bmxns is not available
May  4 12:09:14.915: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May  4 12:09:15.923: INFO: Pod daemon-set-kvcqj is not available
May  4 12:09:15.956: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4127, will wait for the garbage collector to delete the pods
May  4 12:09:16.076: INFO: Deleting DaemonSet.extensions daemon-set took: 33.287518ms
May  4 12:09:16.376: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.299636ms
May  4 12:09:23.780: INFO: Number of nodes with available pods: 0
May  4 12:09:23.780: INFO: Number of running nodes: 0, number of available pods: 0
May  4 12:09:23.783: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4127/daemonsets","resourceVersion":"1437974"},"items":null}

May  4 12:09:23.786: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4127/pods","resourceVersion":"1437974"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:23.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4127" for this suite.

• [SLOW TEST:24.325 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":216,"skipped":3714,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:23.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:09:23.890: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:25.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9740" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":217,"skipped":3718,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:25.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  4 12:09:25.283: INFO: Waiting up to 5m0s for pod "downwardapi-volume-936aa811-2dbe-44fa-b5e7-2d6011a5e535" in namespace "downward-api-8597" to be "Succeeded or Failed"
May  4 12:09:25.302: INFO: Pod "downwardapi-volume-936aa811-2dbe-44fa-b5e7-2d6011a5e535": Phase="Pending", Reason="", readiness=false. Elapsed: 18.883979ms
May  4 12:09:27.306: INFO: Pod "downwardapi-volume-936aa811-2dbe-44fa-b5e7-2d6011a5e535": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022924577s
May  4 12:09:29.310: INFO: Pod "downwardapi-volume-936aa811-2dbe-44fa-b5e7-2d6011a5e535": Phase="Running", Reason="", readiness=true. Elapsed: 4.027204357s
May  4 12:09:31.315: INFO: Pod "downwardapi-volume-936aa811-2dbe-44fa-b5e7-2d6011a5e535": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.031542986s
STEP: Saw pod success
May  4 12:09:31.315: INFO: Pod "downwardapi-volume-936aa811-2dbe-44fa-b5e7-2d6011a5e535" satisfied condition "Succeeded or Failed"
May  4 12:09:31.318: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-936aa811-2dbe-44fa-b5e7-2d6011a5e535 container client-container: 
STEP: delete the pod
May  4 12:09:31.367: INFO: Waiting for pod downwardapi-volume-936aa811-2dbe-44fa-b5e7-2d6011a5e535 to disappear
May  4 12:09:31.380: INFO: Pod downwardapi-volume-936aa811-2dbe-44fa-b5e7-2d6011a5e535 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:31.380: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8597" for this suite.

• [SLOW TEST:6.202 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3724,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:31.389: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test env composition
May  4 12:09:31.485: INFO: Waiting up to 5m0s for pod "var-expansion-5a1fc10e-a7a5-4710-bd61-81cdec721a76" in namespace "var-expansion-1847" to be "Succeeded or Failed"
May  4 12:09:31.505: INFO: Pod "var-expansion-5a1fc10e-a7a5-4710-bd61-81cdec721a76": Phase="Pending", Reason="", readiness=false. Elapsed: 19.487201ms
May  4 12:09:33.509: INFO: Pod "var-expansion-5a1fc10e-a7a5-4710-bd61-81cdec721a76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023468606s
May  4 12:09:35.513: INFO: Pod "var-expansion-5a1fc10e-a7a5-4710-bd61-81cdec721a76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027693682s
STEP: Saw pod success
May  4 12:09:35.513: INFO: Pod "var-expansion-5a1fc10e-a7a5-4710-bd61-81cdec721a76" satisfied condition "Succeeded or Failed"
May  4 12:09:35.516: INFO: Trying to get logs from node kali-worker pod var-expansion-5a1fc10e-a7a5-4710-bd61-81cdec721a76 container dapi-container: 
STEP: delete the pod
May  4 12:09:35.531: INFO: Waiting for pod var-expansion-5a1fc10e-a7a5-4710-bd61-81cdec721a76 to disappear
May  4 12:09:35.595: INFO: Pod var-expansion-5a1fc10e-a7a5-4710-bd61-81cdec721a76 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:35.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-1847" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3765,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:35.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:09:35.742: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config version'
May  4 12:09:35.911: INFO: stderr: ""
May  4 12:09:35.911: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T17:28:31Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.2\", GitCommit:\"52c56ce7a8272c798dbc29846288d7cd9fbae032\", GitTreeState:\"clean\", BuildDate:\"2020-04-28T05:35:31Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:35.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5526" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":220,"skipped":3766,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:35.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-f3545ee7-45e1-4391-a086-d5604661b87d
STEP: Creating a pod to test consume secrets
May  4 12:09:36.034: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2a44e52e-970d-421f-9219-bdfda03e9dfb" in namespace "projected-4485" to be "Succeeded or Failed"
May  4 12:09:36.063: INFO: Pod "pod-projected-secrets-2a44e52e-970d-421f-9219-bdfda03e9dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 29.263375ms
May  4 12:09:38.068: INFO: Pod "pod-projected-secrets-2a44e52e-970d-421f-9219-bdfda03e9dfb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033718036s
May  4 12:09:40.072: INFO: Pod "pod-projected-secrets-2a44e52e-970d-421f-9219-bdfda03e9dfb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038487533s
STEP: Saw pod success
May  4 12:09:40.072: INFO: Pod "pod-projected-secrets-2a44e52e-970d-421f-9219-bdfda03e9dfb" satisfied condition "Succeeded or Failed"
May  4 12:09:40.076: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-2a44e52e-970d-421f-9219-bdfda03e9dfb container secret-volume-test: 
STEP: delete the pod
May  4 12:09:40.131: INFO: Waiting for pod pod-projected-secrets-2a44e52e-970d-421f-9219-bdfda03e9dfb to disappear
May  4 12:09:40.138: INFO: Pod pod-projected-secrets-2a44e52e-970d-421f-9219-bdfda03e9dfb no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:40.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4485" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":221,"skipped":3804,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:40.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-69572179-ff32-45da-99df-e40fb9de7f79
STEP: Creating configMap with name cm-test-opt-upd-8e84a221-adcd-4824-96ec-4bc5c953db12
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-69572179-ff32-45da-99df-e40fb9de7f79
STEP: Updating configmap cm-test-opt-upd-8e84a221-adcd-4824-96ec-4bc5c953db12
STEP: Creating configMap with name cm-test-opt-create-277015cd-3649-4fff-a9e4-f633b646f530
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:50.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1272" for this suite.

• [SLOW TEST:10.289 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3810,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:50.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  4 12:09:50.502: INFO: Waiting up to 5m0s for pod "downwardapi-volume-65c87a4c-bd64-4fe0-a0a2-ca32f5165ffc" in namespace "projected-482" to be "Succeeded or Failed"
May  4 12:09:50.549: INFO: Pod "downwardapi-volume-65c87a4c-bd64-4fe0-a0a2-ca32f5165ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 46.666485ms
May  4 12:09:52.571: INFO: Pod "downwardapi-volume-65c87a4c-bd64-4fe0-a0a2-ca32f5165ffc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068939063s
May  4 12:09:54.591: INFO: Pod "downwardapi-volume-65c87a4c-bd64-4fe0-a0a2-ca32f5165ffc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.088273079s
STEP: Saw pod success
May  4 12:09:54.591: INFO: Pod "downwardapi-volume-65c87a4c-bd64-4fe0-a0a2-ca32f5165ffc" satisfied condition "Succeeded or Failed"
May  4 12:09:54.593: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-65c87a4c-bd64-4fe0-a0a2-ca32f5165ffc container client-container: 
STEP: delete the pod
May  4 12:09:54.634: INFO: Waiting for pod downwardapi-volume-65c87a4c-bd64-4fe0-a0a2-ca32f5165ffc to disappear
May  4 12:09:54.638: INFO: Pod downwardapi-volume-65c87a4c-bd64-4fe0-a0a2-ca32f5165ffc no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:54.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-482" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":223,"skipped":3850,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:54.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-059b0b99-2fcc-4732-b942-ad7697b5142c
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:55.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9014" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":224,"skipped":3883,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:55.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:09:59.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4513" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3892,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:09:59.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:09:59.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
May  4 12:10:02.230: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2075 create -f -'
May  4 12:10:06.172: INFO: stderr: ""
May  4 12:10:06.172: INFO: stdout: "e2e-test-crd-publish-openapi-92-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May  4 12:10:06.172: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2075 delete e2e-test-crd-publish-openapi-92-crds test-cr'
May  4 12:10:06.302: INFO: stderr: ""
May  4 12:10:06.302: INFO: stdout: "e2e-test-crd-publish-openapi-92-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
May  4 12:10:06.302: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2075 apply -f -'
May  4 12:10:06.582: INFO: stderr: ""
May  4 12:10:06.582: INFO: stdout: "e2e-test-crd-publish-openapi-92-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
May  4 12:10:06.582: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2075 delete e2e-test-crd-publish-openapi-92-crds test-cr'
May  4 12:10:06.704: INFO: stderr: ""
May  4 12:10:06.704: INFO: stdout: "e2e-test-crd-publish-openapi-92-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
May  4 12:10:06.704: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-92-crds'
May  4 12:10:06.941: INFO: stderr: ""
May  4 12:10:06.941: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-92-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:10:09.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2075" for this suite.

• [SLOW TEST:10.680 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":226,"skipped":3952,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:10:09.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May  4 12:10:09.996: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3077 /api/v1/namespaces/watch-3077/configmaps/e2e-watch-test-label-changed e9e0f2d2-0d00-493d-832a-7bdb47115504 1438368 0 2020-05-04 12:10:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:10:09.997: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3077 /api/v1/namespaces/watch-3077/configmaps/e2e-watch-test-label-changed e9e0f2d2-0d00-493d-832a-7bdb47115504 1438369 0 2020-05-04 12:10:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:10:09.997: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3077 /api/v1/namespaces/watch-3077/configmaps/e2e-watch-test-label-changed e9e0f2d2-0d00-493d-832a-7bdb47115504 1438370 0 2020-05-04 12:10:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:09 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May  4 12:10:20.084: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3077 /api/v1/namespaces/watch-3077/configmaps/e2e-watch-test-label-changed e9e0f2d2-0d00-493d-832a-7bdb47115504 1438405 0 2020-05-04 12:10:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:10:20.085: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3077 /api/v1/namespaces/watch-3077/configmaps/e2e-watch-test-label-changed e9e0f2d2-0d00-493d-832a-7bdb47115504 1438406 0 2020-05-04 12:10:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:10:20.085: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-3077 /api/v1/namespaces/watch-3077/configmaps/e2e-watch-test-label-changed e9e0f2d2-0d00-493d-832a-7bdb47115504 1438407 0 2020-05-04 12:10:09 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:20 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:10:20.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3077" for this suite.

• [SLOW TEST:10.229 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":227,"skipped":3990,"failed":0}
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:10:20.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May  4 12:10:28.321: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  4 12:10:28.328: INFO: Pod pod-with-poststart-exec-hook still exists
May  4 12:10:30.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  4 12:10:30.333: INFO: Pod pod-with-poststart-exec-hook still exists
May  4 12:10:32.328: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May  4 12:10:32.333: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:10:32.333: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3017" for this suite.

• [SLOW TEST:12.239 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3992,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:10:32.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:10:32.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5694" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":229,"skipped":4005,"failed":0}
SSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:10:32.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May  4 12:10:40.770: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  4 12:10:40.855: INFO: Pod pod-with-prestop-exec-hook still exists
May  4 12:10:42.855: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  4 12:10:42.859: INFO: Pod pod-with-prestop-exec-hook still exists
May  4 12:10:44.855: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  4 12:10:44.859: INFO: Pod pod-with-prestop-exec-hook still exists
May  4 12:10:46.855: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  4 12:10:46.860: INFO: Pod pod-with-prestop-exec-hook still exists
May  4 12:10:48.855: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  4 12:10:48.860: INFO: Pod pod-with-prestop-exec-hook still exists
May  4 12:10:50.855: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  4 12:10:50.859: INFO: Pod pod-with-prestop-exec-hook still exists
May  4 12:10:52.855: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  4 12:10:52.860: INFO: Pod pod-with-prestop-exec-hook still exists
May  4 12:10:54.855: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May  4 12:10:54.859: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:10:54.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2219" for this suite.

• [SLOW TEST:22.335 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":230,"skipped":4008,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:10:54.875: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
May  4 12:10:54.943: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-943 /api/v1/namespaces/watch-943/configmaps/e2e-watch-test-watch-closed 0501ed99-aa34-4b8e-9354-41068423cc70 1438598 0 2020-05-04 12:10:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:10:54.943: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-943 /api/v1/namespaces/watch-943/configmaps/e2e-watch-test-watch-closed 0501ed99-aa34-4b8e-9354-41068423cc70 1438599 0 2020-05-04 12:10:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
May  4 12:10:54.977: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-943 /api/v1/namespaces/watch-943/configmaps/e2e-watch-test-watch-closed 0501ed99-aa34-4b8e-9354-41068423cc70 1438600 0 2020-05-04 12:10:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
May  4 12:10:54.977: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-943 /api/v1/namespaces/watch-943/configmaps/e2e-watch-test-watch-closed 0501ed99-aa34-4b8e-9354-41068423cc70 1438601 0 2020-05-04 12:10:54 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-05-04 12:10:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:10:54.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-943" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":231,"skipped":4028,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:10:54.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-32e17266-ada1-4fbd-b77a-537c8c31546c
STEP: Creating a pod to test consume configMaps
May  4 12:10:55.066: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5b04eaca-3d37-4ca4-84c9-b3dead2b52a5" in namespace "projected-9865" to be "Succeeded or Failed"
May  4 12:10:55.113: INFO: Pod "pod-projected-configmaps-5b04eaca-3d37-4ca4-84c9-b3dead2b52a5": Phase="Pending", Reason="", readiness=false. Elapsed: 47.106553ms
May  4 12:10:57.117: INFO: Pod "pod-projected-configmaps-5b04eaca-3d37-4ca4-84c9-b3dead2b52a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051189933s
May  4 12:10:59.121: INFO: Pod "pod-projected-configmaps-5b04eaca-3d37-4ca4-84c9-b3dead2b52a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055379644s
STEP: Saw pod success
May  4 12:10:59.121: INFO: Pod "pod-projected-configmaps-5b04eaca-3d37-4ca4-84c9-b3dead2b52a5" satisfied condition "Succeeded or Failed"
May  4 12:10:59.125: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-5b04eaca-3d37-4ca4-84c9-b3dead2b52a5 container projected-configmap-volume-test: 
STEP: delete the pod
May  4 12:10:59.216: INFO: Waiting for pod pod-projected-configmaps-5b04eaca-3d37-4ca4-84c9-b3dead2b52a5 to disappear
May  4 12:10:59.224: INFO: Pod pod-projected-configmaps-5b04eaca-3d37-4ca4-84c9-b3dead2b52a5 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:10:59.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9865" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":232,"skipped":4039,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:10:59.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May  4 12:10:59.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:11:15.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7450" for this suite.

• [SLOW TEST:16.116 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":233,"skipped":4045,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:11:15.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
May  4 12:11:19.436: INFO: Pod pod-hostip-8a3732e5-eb0e-4301-979f-886da4335b62 has hostIP: 172.17.0.15
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:11:19.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5424" for this suite.
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":4048,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:11:19.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  4 12:11:19.540: INFO: Waiting up to 5m0s for pod "downwardapi-volume-094e95de-a50f-4854-8cb3-e43c9489a0cb" in namespace "projected-5431" to be "Succeeded or Failed"
May  4 12:11:19.577: INFO: Pod "downwardapi-volume-094e95de-a50f-4854-8cb3-e43c9489a0cb": Phase="Pending", Reason="", readiness=false. Elapsed: 36.575066ms
May  4 12:11:21.582: INFO: Pod "downwardapi-volume-094e95de-a50f-4854-8cb3-e43c9489a0cb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041354837s
May  4 12:11:23.587: INFO: Pod "downwardapi-volume-094e95de-a50f-4854-8cb3-e43c9489a0cb": Phase="Running", Reason="", readiness=true. Elapsed: 4.046255282s
May  4 12:11:25.590: INFO: Pod "downwardapi-volume-094e95de-a50f-4854-8cb3-e43c9489a0cb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.050052603s
STEP: Saw pod success
May  4 12:11:25.590: INFO: Pod "downwardapi-volume-094e95de-a50f-4854-8cb3-e43c9489a0cb" satisfied condition "Succeeded or Failed"
May  4 12:11:25.592: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-094e95de-a50f-4854-8cb3-e43c9489a0cb container client-container: 
STEP: delete the pod
May  4 12:11:25.690: INFO: Waiting for pod downwardapi-volume-094e95de-a50f-4854-8cb3-e43c9489a0cb to disappear
May  4 12:11:25.703: INFO: Pod downwardapi-volume-094e95de-a50f-4854-8cb3-e43c9489a0cb no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:11:25.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5431" for this suite.

• [SLOW TEST:6.267 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":235,"skipped":4061,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:11:25.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  4 12:11:25.898: INFO: Waiting up to 5m0s for pod "downwardapi-volume-91391d2d-86fd-4647-89e8-eee6fae397a9" in namespace "projected-7717" to be "Succeeded or Failed"
May  4 12:11:25.910: INFO: Pod "downwardapi-volume-91391d2d-86fd-4647-89e8-eee6fae397a9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.105008ms
May  4 12:11:27.914: INFO: Pod "downwardapi-volume-91391d2d-86fd-4647-89e8-eee6fae397a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015899642s
May  4 12:11:29.917: INFO: Pod "downwardapi-volume-91391d2d-86fd-4647-89e8-eee6fae397a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01890728s
STEP: Saw pod success
May  4 12:11:29.917: INFO: Pod "downwardapi-volume-91391d2d-86fd-4647-89e8-eee6fae397a9" satisfied condition "Succeeded or Failed"
May  4 12:11:29.920: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-91391d2d-86fd-4647-89e8-eee6fae397a9 container client-container: 
STEP: delete the pod
May  4 12:11:29.982: INFO: Waiting for pod downwardapi-volume-91391d2d-86fd-4647-89e8-eee6fae397a9 to disappear
May  4 12:11:29.995: INFO: Pod downwardapi-volume-91391d2d-86fd-4647-89e8-eee6fae397a9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:11:29.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7717" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4071,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:11:30.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  4 12:11:30.127: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b548125d-edb7-41cb-9a9d-d42f84a0c602" in namespace "projected-6089" to be "Succeeded or Failed"
May  4 12:11:30.163: INFO: Pod "downwardapi-volume-b548125d-edb7-41cb-9a9d-d42f84a0c602": Phase="Pending", Reason="", readiness=false. Elapsed: 35.269293ms
May  4 12:11:32.167: INFO: Pod "downwardapi-volume-b548125d-edb7-41cb-9a9d-d42f84a0c602": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039417749s
May  4 12:11:34.171: INFO: Pod "downwardapi-volume-b548125d-edb7-41cb-9a9d-d42f84a0c602": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043839381s
STEP: Saw pod success
May  4 12:11:34.171: INFO: Pod "downwardapi-volume-b548125d-edb7-41cb-9a9d-d42f84a0c602" satisfied condition "Succeeded or Failed"
May  4 12:11:34.174: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b548125d-edb7-41cb-9a9d-d42f84a0c602 container client-container: 
STEP: delete the pod
May  4 12:11:34.247: INFO: Waiting for pod downwardapi-volume-b548125d-edb7-41cb-9a9d-d42f84a0c602 to disappear
May  4 12:11:34.255: INFO: Pod downwardapi-volume-b548125d-edb7-41cb-9a9d-d42f84a0c602 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:11:34.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6089" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4097,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:11:34.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 12:11:34.906: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 12:11:36.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191094, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191094, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191095, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191094, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 12:11:39.978: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:11:39.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2057-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:11:41.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8637" for this suite.
STEP: Destroying namespace "webhook-8637-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.936 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":238,"skipped":4130,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:11:41.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3507.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3507.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3507.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  4 12:11:47.365: INFO: DNS probes using dns-test-f6b06210-f27d-4e18-814d-a48293fc2612 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3507.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3507.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3507.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  4 12:11:55.542: INFO: File wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local from pod  dns-3507/dns-test-e63e7eb3-d9c5-4a89-b639-4dcaa6d45fdd contains 'foo.example.com.
' instead of 'bar.example.com.'
May  4 12:11:55.547: INFO: Lookups using dns-3507/dns-test-e63e7eb3-d9c5-4a89-b639-4dcaa6d45fdd failed for: [wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local]

May  4 12:12:00.551: INFO: File wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local from pod  dns-3507/dns-test-e63e7eb3-d9c5-4a89-b639-4dcaa6d45fdd contains 'foo.example.com.
' instead of 'bar.example.com.'
May  4 12:12:00.555: INFO: Lookups using dns-3507/dns-test-e63e7eb3-d9c5-4a89-b639-4dcaa6d45fdd failed for: [wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local]

May  4 12:12:05.552: INFO: File wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local from pod  dns-3507/dns-test-e63e7eb3-d9c5-4a89-b639-4dcaa6d45fdd contains 'foo.example.com.
' instead of 'bar.example.com.'
May  4 12:12:05.556: INFO: Lookups using dns-3507/dns-test-e63e7eb3-d9c5-4a89-b639-4dcaa6d45fdd failed for: [wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local]

May  4 12:12:10.552: INFO: File wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local from pod  dns-3507/dns-test-e63e7eb3-d9c5-4a89-b639-4dcaa6d45fdd contains 'foo.example.com.
' instead of 'bar.example.com.'
May  4 12:12:10.556: INFO: Lookups using dns-3507/dns-test-e63e7eb3-d9c5-4a89-b639-4dcaa6d45fdd failed for: [wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local]

May  4 12:12:15.556: INFO: DNS probes using dns-test-e63e7eb3-d9c5-4a89-b639-4dcaa6d45fdd succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3507.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3507.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3507.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3507.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  4 12:12:22.566: INFO: DNS probes using dns-test-438d80b3-e479-4789-a531-b8820ac49677 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:12:22.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3507" for this suite.

• [SLOW TEST:41.715 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":239,"skipped":4135,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:12:22.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 12:12:23.657: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 12:12:25.683: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191143, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191143, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191143, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191143, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 12:12:28.740: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:12:28.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-567-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:12:29.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3127" for this suite.
STEP: Destroying namespace "webhook-3127-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.060 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":240,"skipped":4178,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:12:29.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-10cebf59-55f0-41bf-85c3-62c2f66464f8 in namespace container-probe-5325
May  4 12:12:34.086: INFO: Started pod test-webserver-10cebf59-55f0-41bf-85c3-62c2f66464f8 in namespace container-probe-5325
STEP: checking the pod's current state and verifying that restartCount is present
May  4 12:12:34.090: INFO: Initial restart count of pod test-webserver-10cebf59-55f0-41bf-85c3-62c2f66464f8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:16:35.164: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5325" for this suite.

• [SLOW TEST:245.552 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4179,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:16:35.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-9ca910e8-2a14-419e-bb63-840310de1ea8
STEP: Creating a pod to test consume configMaps
May  4 12:16:35.721: INFO: Waiting up to 5m0s for pod "pod-configmaps-464db6d1-1f81-46de-a88b-42c243a32e3e" in namespace "configmap-4810" to be "Succeeded or Failed"
May  4 12:16:35.740: INFO: Pod "pod-configmaps-464db6d1-1f81-46de-a88b-42c243a32e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.831418ms
May  4 12:16:37.744: INFO: Pod "pod-configmaps-464db6d1-1f81-46de-a88b-42c243a32e3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022364037s
May  4 12:16:39.748: INFO: Pod "pod-configmaps-464db6d1-1f81-46de-a88b-42c243a32e3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02684678s
STEP: Saw pod success
May  4 12:16:39.748: INFO: Pod "pod-configmaps-464db6d1-1f81-46de-a88b-42c243a32e3e" satisfied condition "Succeeded or Failed"
May  4 12:16:39.751: INFO: Trying to get logs from node kali-worker pod pod-configmaps-464db6d1-1f81-46de-a88b-42c243a32e3e container configmap-volume-test: 
STEP: delete the pod
May  4 12:16:39.816: INFO: Waiting for pod pod-configmaps-464db6d1-1f81-46de-a88b-42c243a32e3e to disappear
May  4 12:16:39.823: INFO: Pod pod-configmaps-464db6d1-1f81-46de-a88b-42c243a32e3e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:16:39.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4810" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":242,"skipped":4185,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:16:39.859: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0504 12:16:52.483305       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May  4 12:16:52.483: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:16:52.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7747" for this suite.

• [SLOW TEST:12.631 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":243,"skipped":4185,"failed":0}
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:16:52.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-42a71890-1603-4b45-9561-84c0e4d2184f
STEP: Creating a pod to test consume configMaps
May  4 12:16:52.670: INFO: Waiting up to 5m0s for pod "pod-configmaps-102e6ec5-1ca6-4d6f-bbfe-e298661631ff" in namespace "configmap-5227" to be "Succeeded or Failed"
May  4 12:16:52.687: INFO: Pod "pod-configmaps-102e6ec5-1ca6-4d6f-bbfe-e298661631ff": Phase="Pending", Reason="", readiness=false. Elapsed: 16.620548ms
May  4 12:16:54.691: INFO: Pod "pod-configmaps-102e6ec5-1ca6-4d6f-bbfe-e298661631ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02062779s
May  4 12:16:56.725: INFO: Pod "pod-configmaps-102e6ec5-1ca6-4d6f-bbfe-e298661631ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054662329s
STEP: Saw pod success
May  4 12:16:56.725: INFO: Pod "pod-configmaps-102e6ec5-1ca6-4d6f-bbfe-e298661631ff" satisfied condition "Succeeded or Failed"
May  4 12:16:56.728: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-102e6ec5-1ca6-4d6f-bbfe-e298661631ff container configmap-volume-test: 
STEP: delete the pod
May  4 12:16:56.959: INFO: Waiting for pod pod-configmaps-102e6ec5-1ca6-4d6f-bbfe-e298661631ff to disappear
May  4 12:16:57.031: INFO: Pod pod-configmaps-102e6ec5-1ca6-4d6f-bbfe-e298661631ff no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:16:57.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5227" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":244,"skipped":4191,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:16:57.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
May  4 12:17:03.733: INFO: Successfully updated pod "labelsupdate6fbe31a9-3218-46f7-9592-0d1f3e2a8809"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:17:05.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4476" for this suite.

• [SLOW TEST:8.797 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":245,"skipped":4208,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:17:05.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-54a7ce27-09a7-4eb1-acb6-f5ebb4c432d4
STEP: Creating a pod to test consume configMaps
May  4 12:17:05.977: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f16646f-55ce-4979-bb3f-79054988d73a" in namespace "configmap-8120" to be "Succeeded or Failed"
May  4 12:17:05.980: INFO: Pod "pod-configmaps-8f16646f-55ce-4979-bb3f-79054988d73a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.208346ms
May  4 12:17:07.999: INFO: Pod "pod-configmaps-8f16646f-55ce-4979-bb3f-79054988d73a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022290609s
May  4 12:17:10.004: INFO: Pod "pod-configmaps-8f16646f-55ce-4979-bb3f-79054988d73a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026691333s
STEP: Saw pod success
May  4 12:17:10.004: INFO: Pod "pod-configmaps-8f16646f-55ce-4979-bb3f-79054988d73a" satisfied condition "Succeeded or Failed"
May  4 12:17:10.007: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-8f16646f-55ce-4979-bb3f-79054988d73a container configmap-volume-test: 
STEP: delete the pod
May  4 12:17:10.059: INFO: Waiting for pod pod-configmaps-8f16646f-55ce-4979-bb3f-79054988d73a to disappear
May  4 12:17:10.070: INFO: Pod pod-configmaps-8f16646f-55ce-4979-bb3f-79054988d73a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:17:10.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8120" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4218,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:17:10.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 12:17:10.771: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 12:17:12.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191430, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191430, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191430, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191430, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 12:17:16.007: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:17:16.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3087" for this suite.
STEP: Destroying namespace "webhook-3087-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.228 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":247,"skipped":4225,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:17:16.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 12:17:17.167: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 12:17:19.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191437, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191437, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191437, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191437, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 12:17:22.253: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: updating (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: updating (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:17:32.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8727" for this suite.
STEP: Destroying namespace "webhook-8727-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.264 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":248,"skipped":4225,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:17:32.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
May  4 12:17:32.663: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:17:40.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9316" for this suite.

• [SLOW TEST:7.900 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":249,"skipped":4233,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:17:40.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
May  4 12:17:40.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:17:55.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6132" for this suite.

• [SLOW TEST:15.125 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":250,"skipped":4262,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:17:55.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-f84140ed-ee69-49ed-9b53-60897932bbc5
STEP: Creating a pod to test consume configMaps
May  4 12:17:55.774: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3c1b3e7f-4de4-4ada-bab9-1b2f6d801973" in namespace "projected-6997" to be "Succeeded or Failed"
May  4 12:17:55.785: INFO: Pod "pod-projected-configmaps-3c1b3e7f-4de4-4ada-bab9-1b2f6d801973": Phase="Pending", Reason="", readiness=false. Elapsed: 10.807033ms
May  4 12:17:57.888: INFO: Pod "pod-projected-configmaps-3c1b3e7f-4de4-4ada-bab9-1b2f6d801973": Phase="Pending", Reason="", readiness=false. Elapsed: 2.11382038s
May  4 12:17:59.893: INFO: Pod "pod-projected-configmaps-3c1b3e7f-4de4-4ada-bab9-1b2f6d801973": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.11909638s
STEP: Saw pod success
May  4 12:17:59.893: INFO: Pod "pod-projected-configmaps-3c1b3e7f-4de4-4ada-bab9-1b2f6d801973" satisfied condition "Succeeded or Failed"
May  4 12:17:59.897: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-3c1b3e7f-4de4-4ada-bab9-1b2f6d801973 container projected-configmap-volume-test: 
STEP: delete the pod
May  4 12:17:59.942: INFO: Waiting for pod pod-projected-configmaps-3c1b3e7f-4de4-4ada-bab9-1b2f6d801973 to disappear
May  4 12:17:59.988: INFO: Pod pod-projected-configmaps-3c1b3e7f-4de4-4ada-bab9-1b2f6d801973 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:17:59.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6997" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4281,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:17:59.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 12:18:00.966: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 12:18:03.217: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191480, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191480, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191481, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191480, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 12:18:06.260: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:18:06.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-362" for this suite.
STEP: Destroying namespace "webhook-362-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.963 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":252,"skipped":4291,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:18:06.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May  4 12:18:08.353: INFO: Pod name wrapped-volume-race-40a7ca8d-f7a9-4525-a33a-5219209c352b: Found 0 pods out of 5
May  4 12:18:13.362: INFO: Pod name wrapped-volume-race-40a7ca8d-f7a9-4525-a33a-5219209c352b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-40a7ca8d-f7a9-4525-a33a-5219209c352b in namespace emptydir-wrapper-134, will wait for the garbage collector to delete the pods
May  4 12:18:29.493: INFO: Deleting ReplicationController wrapped-volume-race-40a7ca8d-f7a9-4525-a33a-5219209c352b took: 8.260055ms
May  4 12:18:29.893: INFO: Terminating ReplicationController wrapped-volume-race-40a7ca8d-f7a9-4525-a33a-5219209c352b pods took: 400.248091ms
STEP: Creating RC which spawns configmap-volume pods
May  4 12:18:43.778: INFO: Pod name wrapped-volume-race-34cb7511-c5a2-427e-a3f0-c1b8851adf51: Found 0 pods out of 5
May  4 12:18:48.788: INFO: Pod name wrapped-volume-race-34cb7511-c5a2-427e-a3f0-c1b8851adf51: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-34cb7511-c5a2-427e-a3f0-c1b8851adf51 in namespace emptydir-wrapper-134, will wait for the garbage collector to delete the pods
May  4 12:19:02.874: INFO: Deleting ReplicationController wrapped-volume-race-34cb7511-c5a2-427e-a3f0-c1b8851adf51 took: 9.272512ms
May  4 12:19:03.274: INFO: Terminating ReplicationController wrapped-volume-race-34cb7511-c5a2-427e-a3f0-c1b8851adf51 pods took: 400.345434ms
STEP: Creating RC which spawns configmap-volume pods
May  4 12:19:13.628: INFO: Pod name wrapped-volume-race-bbfc445d-2954-43d6-89a8-59f26dcb7a0a: Found 0 pods out of 5
May  4 12:19:18.638: INFO: Pod name wrapped-volume-race-bbfc445d-2954-43d6-89a8-59f26dcb7a0a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bbfc445d-2954-43d6-89a8-59f26dcb7a0a in namespace emptydir-wrapper-134, will wait for the garbage collector to delete the pods
May  4 12:19:34.733: INFO: Deleting ReplicationController wrapped-volume-race-bbfc445d-2954-43d6-89a8-59f26dcb7a0a took: 8.018672ms
May  4 12:19:35.034: INFO: Terminating ReplicationController wrapped-volume-race-bbfc445d-2954-43d6-89a8-59f26dcb7a0a pods took: 300.24365ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:19:44.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-134" for this suite.

• [SLOW TEST:97.211 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":253,"skipped":4394,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:19:44.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 12:19:44.784: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 12:19:46.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191584, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191584, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191584, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191584, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
May  4 12:19:48.829: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191584, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191584, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191584, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191584, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 12:19:51.859: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that the server cannot talk to, with fail-closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:19:51.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-524" for this suite.
STEP: Destroying namespace "webhook-524-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.850 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":254,"skipped":4441,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:19:52.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5522
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
May  4 12:19:52.219: INFO: Found 0 stateful pods, waiting for 3
May  4 12:20:02.224: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
May  4 12:20:02.224: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
May  4 12:20:02.224: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
May  4 12:20:02.234: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5522 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  4 12:20:02.499: INFO: stderr: "I0504 12:20:02.373541    3609 log.go:172] (0xc0000e8370) (0xc0006e9360) Create stream\nI0504 12:20:02.373609    3609 log.go:172] (0xc0000e8370) (0xc0006e9360) Stream added, broadcasting: 1\nI0504 12:20:02.382306    3609 log.go:172] (0xc0000e8370) Reply frame received for 1\nI0504 12:20:02.382371    3609 log.go:172] (0xc0000e8370) (0xc0003fa000) Create stream\nI0504 12:20:02.382403    3609 log.go:172] (0xc0000e8370) (0xc0003fa000) Stream added, broadcasting: 3\nI0504 12:20:02.383857    3609 log.go:172] (0xc0000e8370) Reply frame received for 3\nI0504 12:20:02.383919    3609 log.go:172] (0xc0000e8370) (0xc000228000) Create stream\nI0504 12:20:02.383952    3609 log.go:172] (0xc0000e8370) (0xc000228000) Stream added, broadcasting: 5\nI0504 12:20:02.384826    3609 log.go:172] (0xc0000e8370) Reply frame received for 5\nI0504 12:20:02.461719    3609 log.go:172] (0xc0000e8370) Data frame received for 5\nI0504 12:20:02.461749    3609 log.go:172] (0xc000228000) (5) Data frame handling\nI0504 12:20:02.461770    3609 log.go:172] (0xc000228000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0504 12:20:02.492154    3609 log.go:172] (0xc0000e8370) Data frame received for 3\nI0504 12:20:02.492189    3609 log.go:172] (0xc0003fa000) (3) Data frame handling\nI0504 12:20:02.492203    3609 log.go:172] (0xc0003fa000) (3) Data frame sent\nI0504 12:20:02.492217    3609 log.go:172] (0xc0000e8370) Data frame received for 3\nI0504 12:20:02.492222    3609 log.go:172] (0xc0003fa000) (3) Data frame handling\nI0504 12:20:02.492422    3609 log.go:172] (0xc0000e8370) Data frame received for 5\nI0504 12:20:02.492463    3609 log.go:172] (0xc000228000) (5) Data frame handling\nI0504 12:20:02.494491    3609 log.go:172] (0xc0000e8370) Data frame received for 1\nI0504 12:20:02.494529    3609 log.go:172] (0xc0006e9360) (1) Data frame handling\nI0504 12:20:02.494568    3609 log.go:172] (0xc0006e9360) (1) Data frame sent\nI0504 12:20:02.494586    3609 log.go:172] (0xc0000e8370) (0xc0006e9360) Stream removed, broadcasting: 1\nI0504 12:20:02.494611    3609 log.go:172] (0xc0000e8370) Go away received\nI0504 12:20:02.494962    3609 log.go:172] (0xc0000e8370) (0xc0006e9360) Stream removed, broadcasting: 1\nI0504 12:20:02.494985    3609 log.go:172] (0xc0000e8370) (0xc0003fa000) Stream removed, broadcasting: 3\nI0504 12:20:02.495008    3609 log.go:172] (0xc0000e8370) (0xc000228000) Stream removed, broadcasting: 5\n"
May  4 12:20:02.499: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  4 12:20:02.499: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
May  4 12:20:12.533: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
May  4 12:20:22.563: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5522 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  4 12:20:25.578: INFO: stderr: "I0504 12:20:25.487475    3629 log.go:172] (0xc0008e22c0) (0xc00089f680) Create stream\nI0504 12:20:25.487525    3629 log.go:172] (0xc0008e22c0) (0xc00089f680) Stream added, broadcasting: 1\nI0504 12:20:25.490664    3629 log.go:172] (0xc0008e22c0) Reply frame received for 1\nI0504 12:20:25.490719    3629 log.go:172] (0xc0008e22c0) (0xc0006d8960) Create stream\nI0504 12:20:25.490743    3629 log.go:172] (0xc0008e22c0) (0xc0006d8960) Stream added, broadcasting: 3\nI0504 12:20:25.491716    3629 log.go:172] (0xc0008e22c0) Reply frame received for 3\nI0504 12:20:25.491759    3629 log.go:172] (0xc0008e22c0) (0xc00089f720) Create stream\nI0504 12:20:25.491773    3629 log.go:172] (0xc0008e22c0) (0xc00089f720) Stream added, broadcasting: 5\nI0504 12:20:25.492852    3629 log.go:172] (0xc0008e22c0) Reply frame received for 5\nI0504 12:20:25.572647    3629 log.go:172] (0xc0008e22c0) Data frame received for 3\nI0504 12:20:25.572679    3629 log.go:172] (0xc0006d8960) (3) Data frame handling\nI0504 12:20:25.572687    3629 log.go:172] (0xc0006d8960) (3) Data frame sent\nI0504 12:20:25.572693    3629 log.go:172] (0xc0008e22c0) Data frame received for 3\nI0504 12:20:25.572697    3629 log.go:172] (0xc0006d8960) (3) Data frame handling\nI0504 12:20:25.572744    3629 log.go:172] (0xc0008e22c0) Data frame received for 5\nI0504 12:20:25.572782    3629 log.go:172] (0xc00089f720) (5) Data frame handling\nI0504 12:20:25.572809    3629 log.go:172] (0xc00089f720) (5) Data frame sent\nI0504 12:20:25.572829    3629 log.go:172] (0xc0008e22c0) Data frame received for 5\nI0504 12:20:25.572852    3629 log.go:172] (0xc00089f720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0504 12:20:25.574256    3629 log.go:172] (0xc0008e22c0) Data frame received for 1\nI0504 12:20:25.574269    3629 log.go:172] (0xc00089f680) (1) Data frame handling\nI0504 12:20:25.574280    3629 log.go:172] (0xc00089f680) (1) Data frame sent\nI0504 12:20:25.574291    3629 log.go:172] (0xc0008e22c0) (0xc00089f680) Stream removed, broadcasting: 1\nI0504 12:20:25.574360    3629 log.go:172] (0xc0008e22c0) Go away received\nI0504 12:20:25.574665    3629 log.go:172] (0xc0008e22c0) (0xc00089f680) Stream removed, broadcasting: 1\nI0504 12:20:25.574681    3629 log.go:172] (0xc0008e22c0) (0xc0006d8960) Stream removed, broadcasting: 3\nI0504 12:20:25.574690    3629 log.go:172] (0xc0008e22c0) (0xc00089f720) Stream removed, broadcasting: 5\n"
May  4 12:20:25.578: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  4 12:20:25.578: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  4 12:20:35.802: INFO: Waiting for StatefulSet statefulset-5522/ss2 to complete update
May  4 12:20:35.802: INFO: Waiting for Pod statefulset-5522/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  4 12:20:35.802: INFO: Waiting for Pod statefulset-5522/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  4 12:20:45.811: INFO: Waiting for StatefulSet statefulset-5522/ss2 to complete update
May  4 12:20:45.811: INFO: Waiting for Pod statefulset-5522/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
May  4 12:20:55.811: INFO: Waiting for StatefulSet statefulset-5522/ss2 to complete update
STEP: Rolling back to a previous revision
May  4 12:21:05.811: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5522 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May  4 12:21:06.084: INFO: stderr: "I0504 12:21:05.960304    3661 log.go:172] (0xc00098a000) (0xc0007ee000) Create stream\nI0504 12:21:05.960377    3661 log.go:172] (0xc00098a000) (0xc0007ee000) Stream added, broadcasting: 1\nI0504 12:21:05.962427    3661 log.go:172] (0xc00098a000) Reply frame received for 1\nI0504 12:21:05.962465    3661 log.go:172] (0xc00098a000) (0xc00083b400) Create stream\nI0504 12:21:05.962478    3661 log.go:172] (0xc00098a000) (0xc00083b400) Stream added, broadcasting: 3\nI0504 12:21:05.963510    3661 log.go:172] (0xc00098a000) Reply frame received for 3\nI0504 12:21:05.963545    3661 log.go:172] (0xc00098a000) (0xc0007ee140) Create stream\nI0504 12:21:05.963558    3661 log.go:172] (0xc00098a000) (0xc0007ee140) Stream added, broadcasting: 5\nI0504 12:21:05.964489    3661 log.go:172] (0xc00098a000) Reply frame received for 5\nI0504 12:21:06.045679    3661 log.go:172] (0xc00098a000) Data frame received for 5\nI0504 12:21:06.045703    3661 log.go:172] (0xc0007ee140) (5) Data frame handling\nI0504 12:21:06.045719    3661 log.go:172] (0xc0007ee140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0504 12:21:06.075342    3661 log.go:172] (0xc00098a000) Data frame received for 3\nI0504 12:21:06.075380    3661 log.go:172] (0xc00083b400) (3) Data frame handling\nI0504 12:21:06.075416    3661 log.go:172] (0xc00083b400) (3) Data frame sent\nI0504 12:21:06.075601    3661 log.go:172] (0xc00098a000) Data frame received for 5\nI0504 12:21:06.075636    3661 log.go:172] (0xc0007ee140) (5) Data frame handling\nI0504 12:21:06.075909    3661 log.go:172] (0xc00098a000) Data frame received for 3\nI0504 12:21:06.075943    3661 log.go:172] (0xc00083b400) (3) Data frame handling\nI0504 12:21:06.078925    3661 log.go:172] (0xc00098a000) Data frame received for 1\nI0504 12:21:06.078959    3661 log.go:172] (0xc0007ee000) (1) Data frame handling\nI0504 12:21:06.078994    3661 log.go:172] (0xc0007ee000) (1) Data frame sent\nI0504 12:21:06.079079    3661 log.go:172] (0xc00098a000) (0xc0007ee000) Stream removed, broadcasting: 1\nI0504 12:21:06.079431    3661 log.go:172] (0xc00098a000) Go away received\nI0504 12:21:06.079571    3661 log.go:172] (0xc00098a000) (0xc0007ee000) Stream removed, broadcasting: 1\nI0504 12:21:06.079603    3661 log.go:172] (0xc00098a000) (0xc00083b400) Stream removed, broadcasting: 3\nI0504 12:21:06.079623    3661 log.go:172] (0xc00098a000) (0xc0007ee140) Stream removed, broadcasting: 5\n"
May  4 12:21:06.084: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May  4 12:21:06.084: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

May  4 12:21:16.128: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
May  4 12:21:26.207: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config exec --namespace=statefulset-5522 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May  4 12:21:26.453: INFO: stderr: "I0504 12:21:26.370638    3682 log.go:172] (0xc00003a4d0) (0xc0006eb540) Create stream\nI0504 12:21:26.370688    3682 log.go:172] (0xc00003a4d0) (0xc0006eb540) Stream added, broadcasting: 1\nI0504 12:21:26.372771    3682 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0504 12:21:26.372808    3682 log.go:172] (0xc00003a4d0) (0xc000554000) Create stream\nI0504 12:21:26.372820    3682 log.go:172] (0xc00003a4d0) (0xc000554000) Stream added, broadcasting: 3\nI0504 12:21:26.373761    3682 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0504 12:21:26.373801    3682 log.go:172] (0xc00003a4d0) (0xc000346000) Create stream\nI0504 12:21:26.373812    3682 log.go:172] (0xc00003a4d0) (0xc000346000) Stream added, broadcasting: 5\nI0504 12:21:26.374676    3682 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0504 12:21:26.445844    3682 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0504 12:21:26.445995    3682 log.go:172] (0xc000554000) (3) Data frame handling\nI0504 12:21:26.446020    3682 log.go:172] (0xc000554000) (3) Data frame sent\nI0504 12:21:26.446050    3682 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0504 12:21:26.446067    3682 log.go:172] (0xc000346000) (5) Data frame handling\nI0504 12:21:26.446080    3682 log.go:172] (0xc000346000) (5) Data frame sent\nI0504 12:21:26.446101    3682 log.go:172] (0xc00003a4d0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0504 12:21:26.446120    3682 log.go:172] (0xc000346000) (5) Data frame handling\nI0504 12:21:26.446174    3682 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0504 12:21:26.446281    3682 log.go:172] (0xc000554000) (3) Data frame handling\nI0504 12:21:26.448054    3682 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0504 12:21:26.448154    3682 log.go:172] (0xc0006eb540) (1) Data frame handling\nI0504 12:21:26.448189    3682 log.go:172] (0xc0006eb540) (1) Data frame sent\nI0504 12:21:26.448236    3682 log.go:172] (0xc00003a4d0) (0xc0006eb540) Stream removed, broadcasting: 1\nI0504 12:21:26.448579    3682 log.go:172] (0xc00003a4d0) (0xc0006eb540) Stream removed, broadcasting: 1\nI0504 12:21:26.448595    3682 log.go:172] (0xc00003a4d0) (0xc000554000) Stream removed, broadcasting: 3\nI0504 12:21:26.448677    3682 log.go:172] (0xc00003a4d0) Go away received\nI0504 12:21:26.448709    3682 log.go:172] (0xc00003a4d0) (0xc000346000) Stream removed, broadcasting: 5\n"
May  4 12:21:26.453: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May  4 12:21:26.453: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

May  4 12:21:36.474: INFO: Waiting for StatefulSet statefulset-5522/ss2 to complete update
May  4 12:21:36.474: INFO: Waiting for Pod statefulset-5522/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  4 12:21:36.474: INFO: Waiting for Pod statefulset-5522/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
May  4 12:21:46.481: INFO: Waiting for StatefulSet statefulset-5522/ss2 to complete update
May  4 12:21:46.482: INFO: Waiting for Pod statefulset-5522/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  4 12:21:56.483: INFO: Deleting all statefulset in ns statefulset-5522
May  4 12:21:56.486: INFO: Scaling statefulset ss2 to 0
May  4 12:22:26.505: INFO: Waiting for statefulset status.replicas updated to 0
May  4 12:22:26.508: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:22:26.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5522" for this suite.

• [SLOW TEST:154.574 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":255,"skipped":4460,"failed":0}
SSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:22:26.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2826 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2826;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2826 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2826;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2826.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2826.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2826.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2826.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2826.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2826.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2826.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2826.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2826.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2826.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2826.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 183.221.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.221.183_udp@PTR;check="$$(dig +tcp +noall +answer +search 183.221.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.221.183_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2826 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2826;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2826 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2826;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2826.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2826.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2826.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2826.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2826.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2826.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2826.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2826.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2826.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2826.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2826.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2826.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 183.221.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.221.183_udp@PTR;check="$$(dig +tcp +noall +answer +search 183.221.111.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.111.221.183_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  4 12:22:33.020: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.024: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.027: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.030: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.033: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.036: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.039: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.042: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.090: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.094: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.097: INFO: Unable to read jessie_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.100: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.103: INFO: Unable to read jessie_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.107: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.109: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.112: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:33.127: INFO: Lookups using dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2826 wheezy_tcp@dns-test-service.dns-2826 wheezy_udp@dns-test-service.dns-2826.svc wheezy_tcp@dns-test-service.dns-2826.svc wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2826 jessie_tcp@dns-test-service.dns-2826 jessie_udp@dns-test-service.dns-2826.svc jessie_tcp@dns-test-service.dns-2826.svc jessie_udp@_http._tcp.dns-test-service.dns-2826.svc jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc]

May  4 12:22:38.132: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.136: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.140: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.143: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.146: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.150: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.152: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.156: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.176: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.179: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.183: INFO: Unable to read jessie_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.186: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.189: INFO: Unable to read jessie_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.193: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.196: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.199: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:38.217: INFO: Lookups using dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2826 wheezy_tcp@dns-test-service.dns-2826 wheezy_udp@dns-test-service.dns-2826.svc wheezy_tcp@dns-test-service.dns-2826.svc wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2826 jessie_tcp@dns-test-service.dns-2826 jessie_udp@dns-test-service.dns-2826.svc jessie_tcp@dns-test-service.dns-2826.svc jessie_udp@_http._tcp.dns-test-service.dns-2826.svc jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc]

May  4 12:22:43.132: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.136: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.140: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.143: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.145: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.148: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.151: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.154: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.176: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.179: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.182: INFO: Unable to read jessie_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.184: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.188: INFO: Unable to read jessie_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.208: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.211: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.214: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:43.231: INFO: Lookups using dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2826 wheezy_tcp@dns-test-service.dns-2826 wheezy_udp@dns-test-service.dns-2826.svc wheezy_tcp@dns-test-service.dns-2826.svc wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2826 jessie_tcp@dns-test-service.dns-2826 jessie_udp@dns-test-service.dns-2826.svc jessie_tcp@dns-test-service.dns-2826.svc jessie_udp@_http._tcp.dns-test-service.dns-2826.svc jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc]

May  4 12:22:48.132: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.136: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.140: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.143: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.146: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.149: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.152: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.154: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.176: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.179: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.183: INFO: Unable to read jessie_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.188: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.192: INFO: Unable to read jessie_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.196: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.198: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.201: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:48.218: INFO: Lookups using dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2826 wheezy_tcp@dns-test-service.dns-2826 wheezy_udp@dns-test-service.dns-2826.svc wheezy_tcp@dns-test-service.dns-2826.svc wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2826 jessie_tcp@dns-test-service.dns-2826 jessie_udp@dns-test-service.dns-2826.svc jessie_tcp@dns-test-service.dns-2826.svc jessie_udp@_http._tcp.dns-test-service.dns-2826.svc jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc]

May  4 12:22:53.131: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.134: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.137: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.140: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.142: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.146: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.149: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.151: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.171: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.173: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.175: INFO: Unable to read jessie_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.178: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.180: INFO: Unable to read jessie_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.182: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.185: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.188: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:53.207: INFO: Lookups using dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2826 wheezy_tcp@dns-test-service.dns-2826 wheezy_udp@dns-test-service.dns-2826.svc wheezy_tcp@dns-test-service.dns-2826.svc wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2826 jessie_tcp@dns-test-service.dns-2826 jessie_udp@dns-test-service.dns-2826.svc jessie_tcp@dns-test-service.dns-2826.svc jessie_udp@_http._tcp.dns-test-service.dns-2826.svc jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc]

May  4 12:22:58.173: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.177: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.180: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.182: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.185: INFO: Unable to read wheezy_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.187: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.190: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.192: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.212: INFO: Unable to read jessie_udp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.215: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.218: INFO: Unable to read jessie_udp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.221: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826 from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.224: INFO: Unable to read jessie_udp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.227: INFO: Unable to read jessie_tcp@dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.230: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.233: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc from pod dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f: the server could not find the requested resource (get pods dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f)
May  4 12:22:58.251: INFO: Lookups using dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-2826 wheezy_tcp@dns-test-service.dns-2826 wheezy_udp@dns-test-service.dns-2826.svc wheezy_tcp@dns-test-service.dns-2826.svc wheezy_udp@_http._tcp.dns-test-service.dns-2826.svc wheezy_tcp@_http._tcp.dns-test-service.dns-2826.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-2826 jessie_tcp@dns-test-service.dns-2826 jessie_udp@dns-test-service.dns-2826.svc jessie_tcp@dns-test-service.dns-2826.svc jessie_udp@_http._tcp.dns-test-service.dns-2826.svc jessie_tcp@_http._tcp.dns-test-service.dns-2826.svc]

May  4 12:23:03.230: INFO: DNS probes using dns-2826/dns-test-3722bedc-dd0b-45e5-9e46-b9e23e03d85f succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:23:04.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2826" for this suite.

• [SLOW TEST:37.818 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":256,"skipped":4465,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:23:04.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
May  4 12:23:04.525: INFO: Waiting up to 5m0s for pod "pod-c97dcc1d-e11a-4e72-b147-8caa248976c5" in namespace "emptydir-8590" to be "Succeeded or Failed"
May  4 12:23:04.532: INFO: Pod "pod-c97dcc1d-e11a-4e72-b147-8caa248976c5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.335963ms
May  4 12:23:06.603: INFO: Pod "pod-c97dcc1d-e11a-4e72-b147-8caa248976c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078021927s
May  4 12:23:08.608: INFO: Pod "pod-c97dcc1d-e11a-4e72-b147-8caa248976c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082391627s
STEP: Saw pod success
May  4 12:23:08.608: INFO: Pod "pod-c97dcc1d-e11a-4e72-b147-8caa248976c5" satisfied condition "Succeeded or Failed"
May  4 12:23:08.611: INFO: Trying to get logs from node kali-worker2 pod pod-c97dcc1d-e11a-4e72-b147-8caa248976c5 container test-container: 
STEP: delete the pod
May  4 12:23:08.641: INFO: Waiting for pod pod-c97dcc1d-e11a-4e72-b147-8caa248976c5 to disappear
May  4 12:23:08.660: INFO: Pod pod-c97dcc1d-e11a-4e72-b147-8caa248976c5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:23:08.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8590" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4496,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:23:08.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:23:09.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5017" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":258,"skipped":4514,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:23:09.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-23dd5ef2-3dcb-4384-9024-31599de223b9
STEP: Creating secret with name s-test-opt-upd-deb4a66f-083e-4891-8aad-f3e432cd771f
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-23dd5ef2-3dcb-4384-9024-31599de223b9
STEP: Updating secret s-test-opt-upd-deb4a66f-083e-4891-8aad-f3e432cd771f
STEP: Creating secret with name s-test-opt-create-25119322-ac32-494e-8402-937b5c67d2df
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:24:37.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9104" for this suite.

• [SLOW TEST:88.720 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4521,"failed":0}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:24:37.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-52616f92-d731-4955-bfc8-c14329f5bbbc
STEP: Creating a pod to test consume secrets
May  4 12:24:37.830: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee5a1e97-1bec-4e4d-81ca-7022cd08e6f3" in namespace "projected-9299" to be "Succeeded or Failed"
May  4 12:24:37.833: INFO: Pod "pod-projected-secrets-ee5a1e97-1bec-4e4d-81ca-7022cd08e6f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.7496ms
May  4 12:24:39.840: INFO: Pod "pod-projected-secrets-ee5a1e97-1bec-4e4d-81ca-7022cd08e6f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010007644s
May  4 12:24:41.844: INFO: Pod "pod-projected-secrets-ee5a1e97-1bec-4e4d-81ca-7022cd08e6f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014571782s
STEP: Saw pod success
May  4 12:24:41.844: INFO: Pod "pod-projected-secrets-ee5a1e97-1bec-4e4d-81ca-7022cd08e6f3" satisfied condition "Succeeded or Failed"
May  4 12:24:41.848: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-ee5a1e97-1bec-4e4d-81ca-7022cd08e6f3 container projected-secret-volume-test: 
STEP: delete the pod
May  4 12:24:41.874: INFO: Waiting for pod pod-projected-secrets-ee5a1e97-1bec-4e4d-81ca-7022cd08e6f3 to disappear
May  4 12:24:41.892: INFO: Pod pod-projected-secrets-ee5a1e97-1bec-4e4d-81ca-7022cd08e6f3 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:24:41.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9299" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":260,"skipped":4522,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:24:41.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-881125bc-5389-44f1-b35f-0f1ccc062bc5
STEP: Creating a pod to test consume configMaps
May  4 12:24:42.265: INFO: Waiting up to 5m0s for pod "pod-configmaps-943c451d-6170-49cf-9c5b-4ae791f340a6" in namespace "configmap-4695" to be "Succeeded or Failed"
May  4 12:24:42.269: INFO: Pod "pod-configmaps-943c451d-6170-49cf-9c5b-4ae791f340a6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.727699ms
May  4 12:24:44.472: INFO: Pod "pod-configmaps-943c451d-6170-49cf-9c5b-4ae791f340a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.207056544s
May  4 12:24:46.476: INFO: Pod "pod-configmaps-943c451d-6170-49cf-9c5b-4ae791f340a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210675285s
May  4 12:24:48.480: INFO: Pod "pod-configmaps-943c451d-6170-49cf-9c5b-4ae791f340a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.215257315s
STEP: Saw pod success
May  4 12:24:48.481: INFO: Pod "pod-configmaps-943c451d-6170-49cf-9c5b-4ae791f340a6" satisfied condition "Succeeded or Failed"
May  4 12:24:48.484: INFO: Trying to get logs from node kali-worker pod pod-configmaps-943c451d-6170-49cf-9c5b-4ae791f340a6 container configmap-volume-test: 
STEP: delete the pod
May  4 12:24:48.522: INFO: Waiting for pod pod-configmaps-943c451d-6170-49cf-9c5b-4ae791f340a6 to disappear
May  4 12:24:48.547: INFO: Pod pod-configmaps-943c451d-6170-49cf-9c5b-4ae791f340a6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:24:48.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4695" for this suite.

• [SLOW TEST:6.656 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4531,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:24:48.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
May  4 12:24:48.644: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eb133fa5-96bb-42e9-ae17-d6a5b2c84093" in namespace "projected-9241" to be "Succeeded or Failed"
May  4 12:24:48.666: INFO: Pod "downwardapi-volume-eb133fa5-96bb-42e9-ae17-d6a5b2c84093": Phase="Pending", Reason="", readiness=false. Elapsed: 21.286612ms
May  4 12:24:50.712: INFO: Pod "downwardapi-volume-eb133fa5-96bb-42e9-ae17-d6a5b2c84093": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067312354s
May  4 12:24:52.716: INFO: Pod "downwardapi-volume-eb133fa5-96bb-42e9-ae17-d6a5b2c84093": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.07183197s
STEP: Saw pod success
May  4 12:24:52.716: INFO: Pod "downwardapi-volume-eb133fa5-96bb-42e9-ae17-d6a5b2c84093" satisfied condition "Succeeded or Failed"
May  4 12:24:52.719: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-eb133fa5-96bb-42e9-ae17-d6a5b2c84093 container client-container: 
STEP: delete the pod
May  4 12:24:52.810: INFO: Waiting for pod downwardapi-volume-eb133fa5-96bb-42e9-ae17-d6a5b2c84093 to disappear
May  4 12:24:52.816: INFO: Pod downwardapi-volume-eb133fa5-96bb-42e9-ae17-d6a5b2c84093 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:24:52.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9241" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4536,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:24:52.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3530
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3530
STEP: Creating statefulset with conflicting port in namespace statefulset-3530
STEP: Waiting until pod test-pod is running in namespace statefulset-3530
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-3530
May  4 12:24:59.074: INFO: Observed stateful pod in namespace: statefulset-3530, name: ss-0, uid: 6d329846-cae8-41a8-9df1-5d2d6e77c302, status phase: Pending. Waiting for statefulset controller to delete.
May  4 12:24:59.340: INFO: Observed stateful pod in namespace: statefulset-3530, name: ss-0, uid: 6d329846-cae8-41a8-9df1-5d2d6e77c302, status phase: Failed. Waiting for statefulset controller to delete.
May  4 12:24:59.348: INFO: Observed stateful pod in namespace: statefulset-3530, name: ss-0, uid: 6d329846-cae8-41a8-9df1-5d2d6e77c302, status phase: Failed. Waiting for statefulset controller to delete.
May  4 12:24:59.370: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3530
STEP: Removing pod with conflicting port in namespace statefulset-3530
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3530 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
May  4 12:25:03.478: INFO: Deleting all statefulset in ns statefulset-3530
May  4 12:25:03.480: INFO: Scaling statefulset ss to 0
May  4 12:25:23.495: INFO: Waiting for statefulset status.replicas updated to 0
May  4 12:25:23.499: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:25:23.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3530" for this suite.

• [SLOW TEST:30.698 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":263,"skipped":4555,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:25:23.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6827
STEP: creating a selector
STEP: Creating the service pods in kubernetes
May  4 12:25:23.593: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
May  4 12:25:23.667: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  4 12:25:25.671: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
May  4 12:25:27.672: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 12:25:29.672: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 12:25:31.672: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 12:25:33.671: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 12:25:35.672: INFO: The status of Pod netserver-0 is Running (Ready = false)
May  4 12:25:37.671: INFO: The status of Pod netserver-0 is Running (Ready = true)
May  4 12:25:37.676: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  4 12:25:39.680: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  4 12:25:41.680: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  4 12:25:43.680: INFO: The status of Pod netserver-1 is Running (Ready = false)
May  4 12:25:45.681: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
May  4 12:25:49.709: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.232:8080/dial?request=hostname&protocol=http&host=10.244.2.23&port=8080&tries=1'] Namespace:pod-network-test-6827 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 12:25:49.710: INFO: >>> kubeConfig: /root/.kube/config
I0504 12:25:49.739154       7 log.go:172] (0xc001e3e370) (0xc00176f4a0) Create stream
I0504 12:25:49.739191       7 log.go:172] (0xc001e3e370) (0xc00176f4a0) Stream added, broadcasting: 1
I0504 12:25:49.741575       7 log.go:172] (0xc001e3e370) Reply frame received for 1
I0504 12:25:49.741617       7 log.go:172] (0xc001e3e370) (0xc000db80a0) Create stream
I0504 12:25:49.741630       7 log.go:172] (0xc001e3e370) (0xc000db80a0) Stream added, broadcasting: 3
I0504 12:25:49.742705       7 log.go:172] (0xc001e3e370) Reply frame received for 3
I0504 12:25:49.742745       7 log.go:172] (0xc001e3e370) (0xc0023c6be0) Create stream
I0504 12:25:49.742764       7 log.go:172] (0xc001e3e370) (0xc0023c6be0) Stream added, broadcasting: 5
I0504 12:25:49.743798       7 log.go:172] (0xc001e3e370) Reply frame received for 5
I0504 12:25:49.826397       7 log.go:172] (0xc001e3e370) Data frame received for 3
I0504 12:25:49.826431       7 log.go:172] (0xc000db80a0) (3) Data frame handling
I0504 12:25:49.826449       7 log.go:172] (0xc000db80a0) (3) Data frame sent
I0504 12:25:49.826795       7 log.go:172] (0xc001e3e370) Data frame received for 3
I0504 12:25:49.826828       7 log.go:172] (0xc000db80a0) (3) Data frame handling
I0504 12:25:49.826967       7 log.go:172] (0xc001e3e370) Data frame received for 5
I0504 12:25:49.826986       7 log.go:172] (0xc0023c6be0) (5) Data frame handling
I0504 12:25:49.828528       7 log.go:172] (0xc001e3e370) Data frame received for 1
I0504 12:25:49.828544       7 log.go:172] (0xc00176f4a0) (1) Data frame handling
I0504 12:25:49.828551       7 log.go:172] (0xc00176f4a0) (1) Data frame sent
I0504 12:25:49.828561       7 log.go:172] (0xc001e3e370) (0xc00176f4a0) Stream removed, broadcasting: 1
I0504 12:25:49.828630       7 log.go:172] (0xc001e3e370) (0xc00176f4a0) Stream removed, broadcasting: 1
I0504 12:25:49.828649       7 log.go:172] (0xc001e3e370) (0xc000db80a0) Stream removed, broadcasting: 3
I0504 12:25:49.828663       7 log.go:172] (0xc001e3e370) (0xc0023c6be0) Stream removed, broadcasting: 5
May  4 12:25:49.828: INFO: Waiting for responses: map[]
I0504 12:25:49.828754       7 log.go:172] (0xc001e3e370) Go away received
May  4 12:25:49.855: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.232:8080/dial?request=hostname&protocol=http&host=10.244.1.231&port=8080&tries=1'] Namespace:pod-network-test-6827 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May  4 12:25:49.856: INFO: >>> kubeConfig: /root/.kube/config
I0504 12:25:49.891478       7 log.go:172] (0xc001e3ea50) (0xc001e16460) Create stream
I0504 12:25:49.891510       7 log.go:172] (0xc001e3ea50) (0xc001e16460) Stream added, broadcasting: 1
I0504 12:25:49.893738       7 log.go:172] (0xc001e3ea50) Reply frame received for 1
I0504 12:25:49.893768       7 log.go:172] (0xc001e3ea50) (0xc000db86e0) Create stream
I0504 12:25:49.893783       7 log.go:172] (0xc001e3ea50) (0xc000db86e0) Stream added, broadcasting: 3
I0504 12:25:49.894770       7 log.go:172] (0xc001e3ea50) Reply frame received for 3
I0504 12:25:49.894808       7 log.go:172] (0xc001e3ea50) (0xc0029d5900) Create stream
I0504 12:25:49.894823       7 log.go:172] (0xc001e3ea50) (0xc0029d5900) Stream added, broadcasting: 5
I0504 12:25:49.895966       7 log.go:172] (0xc001e3ea50) Reply frame received for 5
I0504 12:25:49.957037       7 log.go:172] (0xc001e3ea50) Data frame received for 3
I0504 12:25:49.957074       7 log.go:172] (0xc000db86e0) (3) Data frame handling
I0504 12:25:49.957096       7 log.go:172] (0xc000db86e0) (3) Data frame sent
I0504 12:25:49.957970       7 log.go:172] (0xc001e3ea50) Data frame received for 5
I0504 12:25:49.957998       7 log.go:172] (0xc001e3ea50) Data frame received for 3
I0504 12:25:49.958022       7 log.go:172] (0xc000db86e0) (3) Data frame handling
I0504 12:25:49.958049       7 log.go:172] (0xc0029d5900) (5) Data frame handling
I0504 12:25:49.959548       7 log.go:172] (0xc001e3ea50) Data frame received for 1
I0504 12:25:49.959595       7 log.go:172] (0xc001e16460) (1) Data frame handling
I0504 12:25:49.959638       7 log.go:172] (0xc001e16460) (1) Data frame sent
I0504 12:25:49.959673       7 log.go:172] (0xc001e3ea50) (0xc001e16460) Stream removed, broadcasting: 1
I0504 12:25:49.959701       7 log.go:172] (0xc001e3ea50) Go away received
I0504 12:25:49.959842       7 log.go:172] (0xc001e3ea50) (0xc001e16460) Stream removed, broadcasting: 1
I0504 12:25:49.959860       7 log.go:172] (0xc001e3ea50) (0xc000db86e0) Stream removed, broadcasting: 3
I0504 12:25:49.959869       7 log.go:172] (0xc001e3ea50) (0xc0029d5900) Stream removed, broadcasting: 5
May  4 12:25:49.959: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:25:49.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6827" for this suite.

• [SLOW TEST:26.449 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4582,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:25:49.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
May  4 12:25:50.049: INFO: Waiting up to 5m0s for pod "downward-api-73b86260-cefd-4cc5-859f-7a8922dcf64c" in namespace "downward-api-9586" to be "Succeeded or Failed"
May  4 12:25:50.077: INFO: Pod "downward-api-73b86260-cefd-4cc5-859f-7a8922dcf64c": Phase="Pending", Reason="", readiness=false. Elapsed: 28.276263ms
May  4 12:25:52.144: INFO: Pod "downward-api-73b86260-cefd-4cc5-859f-7a8922dcf64c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094582233s
May  4 12:25:54.149: INFO: Pod "downward-api-73b86260-cefd-4cc5-859f-7a8922dcf64c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099680239s
STEP: Saw pod success
May  4 12:25:54.149: INFO: Pod "downward-api-73b86260-cefd-4cc5-859f-7a8922dcf64c" satisfied condition "Succeeded or Failed"
May  4 12:25:54.152: INFO: Trying to get logs from node kali-worker pod downward-api-73b86260-cefd-4cc5-859f-7a8922dcf64c container dapi-container: 
STEP: delete the pod
May  4 12:25:54.204: INFO: Waiting for pod downward-api-73b86260-cefd-4cc5-859f-7a8922dcf64c to disappear
May  4 12:25:54.216: INFO: Pod downward-api-73b86260-cefd-4cc5-859f-7a8922dcf64c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:25:54.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9586" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":265,"skipped":4586,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:25:54.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:25:59.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6320" for this suite.

• [SLOW TEST:5.158 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":266,"skipped":4599,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:25:59.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May  4 12:26:00.198: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May  4 12:26:02.366: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191960, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191960, loc:(*time.Location)(0x7b200c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191960, loc:(*time.Location)(0x7b200c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63724191960, loc:(*time.Location)(0x7b200c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May  4 12:26:05.425: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:26:05.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3190" for this suite.
STEP: Destroying namespace "webhook-3190-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.217 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":267,"skipped":4599,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:26:05.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:26:05.698: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:26:06.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4295" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":268,"skipped":4599,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:26:06.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
May  4 12:26:06.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
May  4 12:26:09.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 create -f -'
May  4 12:26:12.963: INFO: stderr: ""
May  4 12:26:12.963: INFO: stdout: "e2e-test-crd-publish-openapi-8595-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  4 12:26:12.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 delete e2e-test-crd-publish-openapi-8595-crds test-foo'
May  4 12:26:13.079: INFO: stderr: ""
May  4 12:26:13.079: INFO: stdout: "e2e-test-crd-publish-openapi-8595-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
May  4 12:26:13.079: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 apply -f -'
May  4 12:26:13.325: INFO: stderr: ""
May  4 12:26:13.325: INFO: stdout: "e2e-test-crd-publish-openapi-8595-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
May  4 12:26:13.325: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 delete e2e-test-crd-publish-openapi-8595-crds test-foo'
May  4 12:26:13.450: INFO: stderr: ""
May  4 12:26:13.450: INFO: stdout: "e2e-test-crd-publish-openapi-8595-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
May  4 12:26:13.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 create -f -'
May  4 12:26:13.692: INFO: rc: 1
May  4 12:26:13.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 apply -f -'
May  4 12:26:13.917: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
May  4 12:26:13.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 create -f -'
May  4 12:26:14.165: INFO: rc: 1
May  4 12:26:14.166: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6609 apply -f -'
May  4 12:26:14.412: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
May  4 12:26:14.412: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8595-crds'
May  4 12:26:14.681: INFO: stderr: ""
May  4 12:26:14.681: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8595-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
May  4 12:26:14.682: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8595-crds.metadata'
May  4 12:26:14.909: INFO: stderr: ""
May  4 12:26:14.909: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8595-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May  4 12:26:14.910: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8595-crds.spec'
May  4 12:26:15.158: INFO: stderr: ""
May  4 12:26:15.158: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8595-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
May  4 12:26:15.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8595-crds.spec.bars'
May  4 12:26:15.404: INFO: stderr: ""
May  4 12:26:15.404: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8595-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May  4 12:26:15.404: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:32772 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8595-crds.spec.bars2'
May  4 12:26:15.692: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:26:18.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6609" for this suite.

• [SLOW TEST:12.293 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":269,"skipped":4617,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:26:18.611: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-84373284-edc9-4797-9516-518d24a3ff17
STEP: Creating a pod to test consume configMaps
May  4 12:26:18.693: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3d884dfc-4fba-4672-9f42-e97c2b854fe0" in namespace "projected-8174" to be "Succeeded or Failed"
May  4 12:26:18.709: INFO: Pod "pod-projected-configmaps-3d884dfc-4fba-4672-9f42-e97c2b854fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.582464ms
May  4 12:26:20.712: INFO: Pod "pod-projected-configmaps-3d884dfc-4fba-4672-9f42-e97c2b854fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019152774s
May  4 12:26:22.736: INFO: Pod "pod-projected-configmaps-3d884dfc-4fba-4672-9f42-e97c2b854fe0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043011403s
May  4 12:26:24.741: INFO: Pod "pod-projected-configmaps-3d884dfc-4fba-4672-9f42-e97c2b854fe0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047929206s
STEP: Saw pod success
May  4 12:26:24.741: INFO: Pod "pod-projected-configmaps-3d884dfc-4fba-4672-9f42-e97c2b854fe0" satisfied condition "Succeeded or Failed"
May  4 12:26:24.744: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-3d884dfc-4fba-4672-9f42-e97c2b854fe0 container projected-configmap-volume-test: 
STEP: delete the pod
May  4 12:26:24.796: INFO: Waiting for pod pod-projected-configmaps-3d884dfc-4fba-4672-9f42-e97c2b854fe0 to disappear
May  4 12:26:24.802: INFO: Pod pod-projected-configmaps-3d884dfc-4fba-4672-9f42-e97c2b854fe0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:26:24.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8174" for this suite.

• [SLOW TEST:6.199 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4623,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:26:24.811: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
May  4 12:26:24.942: INFO: Waiting up to 5m0s for pod "pod-bcab47e6-aece-44ac-9bec-4cdc60ee9046" in namespace "emptydir-5711" to be "Succeeded or Failed"
May  4 12:26:24.995: INFO: Pod "pod-bcab47e6-aece-44ac-9bec-4cdc60ee9046": Phase="Pending", Reason="", readiness=false. Elapsed: 52.600176ms
May  4 12:26:27.210: INFO: Pod "pod-bcab47e6-aece-44ac-9bec-4cdc60ee9046": Phase="Pending", Reason="", readiness=false. Elapsed: 2.267481877s
May  4 12:26:29.214: INFO: Pod "pod-bcab47e6-aece-44ac-9bec-4cdc60ee9046": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.272125087s
STEP: Saw pod success
May  4 12:26:29.214: INFO: Pod "pod-bcab47e6-aece-44ac-9bec-4cdc60ee9046" satisfied condition "Succeeded or Failed"
May  4 12:26:29.217: INFO: Trying to get logs from node kali-worker2 pod pod-bcab47e6-aece-44ac-9bec-4cdc60ee9046 container test-container: 
STEP: delete the pod
May  4 12:26:29.342: INFO: Waiting for pod pod-bcab47e6-aece-44ac-9bec-4cdc60ee9046 to disappear
May  4 12:26:29.359: INFO: Pod pod-bcab47e6-aece-44ac-9bec-4cdc60ee9046 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:26:29.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5711" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4646,"failed":0}
S
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:26:29.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6582.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6582.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6582.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6582.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6582.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6582.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6582.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6582.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6582.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6582.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 225.137.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.137.225_udp@PTR;check="$$(dig +tcp +noall +answer +search 225.137.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.137.225_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6582.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6582.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6582.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6582.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6582.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6582.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6582.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6582.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6582.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6582.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6582.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 225.137.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.137.225_udp@PTR;check="$$(dig +tcp +noall +answer +search 225.137.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.137.225_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May  4 12:26:35.612: INFO: Unable to read wheezy_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:35.615: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:35.617: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:35.620: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:35.640: INFO: Unable to read jessie_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:35.643: INFO: Unable to read jessie_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:35.646: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:35.649: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:35.668: INFO: Lookups using dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031 failed for: [wheezy_udp@dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_udp@dns-test-service.dns-6582.svc.cluster.local jessie_tcp@dns-test-service.dns-6582.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local]

May  4 12:26:40.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:40.678: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:40.682: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:40.686: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:40.710: INFO: Unable to read jessie_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:40.712: INFO: Unable to read jessie_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:40.715: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:40.717: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:40.733: INFO: Lookups using dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031 failed for: [wheezy_udp@dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_udp@dns-test-service.dns-6582.svc.cluster.local jessie_tcp@dns-test-service.dns-6582.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local]

May  4 12:26:45.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:45.677: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:45.681: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:45.684: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:45.709: INFO: Unable to read jessie_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:45.711: INFO: Unable to read jessie_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:45.714: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:45.716: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:45.730: INFO: Lookups using dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031 failed for: [wheezy_udp@dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_udp@dns-test-service.dns-6582.svc.cluster.local jessie_tcp@dns-test-service.dns-6582.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local]

May  4 12:26:50.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:50.677: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:50.680: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:50.683: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:50.703: INFO: Unable to read jessie_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:50.706: INFO: Unable to read jessie_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:50.709: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:50.712: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:50.729: INFO: Lookups using dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031 failed for: [wheezy_udp@dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_udp@dns-test-service.dns-6582.svc.cluster.local jessie_tcp@dns-test-service.dns-6582.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local]

May  4 12:26:55.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:55.677: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:55.681: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:55.684: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:55.707: INFO: Unable to read jessie_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:55.710: INFO: Unable to read jessie_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:55.713: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:55.717: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:26:55.742: INFO: Lookups using dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031 failed for: [wheezy_udp@dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_udp@dns-test-service.dns-6582.svc.cluster.local jessie_tcp@dns-test-service.dns-6582.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local]

May  4 12:27:00.673: INFO: Unable to read wheezy_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:27:00.678: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:27:00.681: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:27:00.685: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:27:00.705: INFO: Unable to read jessie_udp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:27:00.708: INFO: Unable to read jessie_tcp@dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:27:00.712: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:27:00.715: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local from pod dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031: the server could not find the requested resource (get pods dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031)
May  4 12:27:00.735: INFO: Lookups using dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031 failed for: [wheezy_udp@dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@dns-test-service.dns-6582.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_udp@dns-test-service.dns-6582.svc.cluster.local jessie_tcp@dns-test-service.dns-6582.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6582.svc.cluster.local]

May  4 12:27:05.733: INFO: DNS probes using dns-6582/dns-test-6d2dd7b1-5b4b-401a-95f8-e191f0bf9031 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:27:06.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6582" for this suite.

• [SLOW TEST:37.137 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":272,"skipped":4647,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:27:06.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
May  4 12:27:06.587: INFO: Waiting up to 5m0s for pod "pod-b348c31b-f543-4063-9d10-d460d8370b3a" in namespace "emptydir-6359" to be "Succeeded or Failed"
May  4 12:27:06.594: INFO: Pod "pod-b348c31b-f543-4063-9d10-d460d8370b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.224408ms
May  4 12:27:08.653: INFO: Pod "pod-b348c31b-f543-4063-9d10-d460d8370b3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066532256s
May  4 12:27:10.658: INFO: Pod "pod-b348c31b-f543-4063-9d10-d460d8370b3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071449633s
STEP: Saw pod success
May  4 12:27:10.658: INFO: Pod "pod-b348c31b-f543-4063-9d10-d460d8370b3a" satisfied condition "Succeeded or Failed"
May  4 12:27:10.661: INFO: Trying to get logs from node kali-worker pod pod-b348c31b-f543-4063-9d10-d460d8370b3a container test-container: 
STEP: delete the pod
May  4 12:27:10.726: INFO: Waiting for pod pod-b348c31b-f543-4063-9d10-d460d8370b3a to disappear
May  4 12:27:10.731: INFO: Pod pod-b348c31b-f543-4063-9d10-d460d8370b3a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:27:10.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6359" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4700,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:27:10.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-a6b84110-081a-4c82-b053-eaaf565f6af8
STEP: Creating a pod to test consume configMaps
May  4 12:27:10.852: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b1e0f979-a9b3-4536-a4b5-3ceb3ff1b7f9" in namespace "projected-1219" to be "Succeeded or Failed"
May  4 12:27:10.923: INFO: Pod "pod-projected-configmaps-b1e0f979-a9b3-4536-a4b5-3ceb3ff1b7f9": Phase="Pending", Reason="", readiness=false. Elapsed: 71.249224ms
May  4 12:27:12.928: INFO: Pod "pod-projected-configmaps-b1e0f979-a9b3-4536-a4b5-3ceb3ff1b7f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075762947s
May  4 12:27:14.932: INFO: Pod "pod-projected-configmaps-b1e0f979-a9b3-4536-a4b5-3ceb3ff1b7f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079332468s
STEP: Saw pod success
May  4 12:27:14.932: INFO: Pod "pod-projected-configmaps-b1e0f979-a9b3-4536-a4b5-3ceb3ff1b7f9" satisfied condition "Succeeded or Failed"
May  4 12:27:14.934: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-b1e0f979-a9b3-4536-a4b5-3ceb3ff1b7f9 container projected-configmap-volume-test: 
STEP: delete the pod
May  4 12:27:14.984: INFO: Waiting for pod pod-projected-configmaps-b1e0f979-a9b3-4536-a4b5-3ceb3ff1b7f9 to disappear
May  4 12:27:14.992: INFO: Pod pod-projected-configmaps-b1e0f979-a9b3-4536-a4b5-3ceb3ff1b7f9 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:27:14.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1219" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4706,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
May  4 12:27:15.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
May  4 12:27:26.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7984" for this suite.

• [SLOW TEST:11.170 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":275,"skipped":4709,"failed":0}
SSSSSSSS
May  4 12:27:26.203: INFO: Running AfterSuite actions on all nodes
May  4 12:27:26.203: INFO: Running AfterSuite actions on node 1
May  4 12:27:26.203: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 4899.227 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS