I0727 10:31:37.353687 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0727 10:31:37.353931 7 e2e.go:124] Starting e2e run "f4269bb3-2b14-484b-968a-e6796a7b9759" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1595845896 - Will randomize all specs
Will run 275 of 4992 specs
Jul 27 10:31:37.412: INFO: >>> kubeConfig: /root/.kube/config
Jul 27 10:31:37.416: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jul 27 10:31:37.434: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jul 27 10:31:37.468: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jul 27 10:31:37.468: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jul 27 10:31:37.468: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jul 27 10:31:37.473: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jul 27 10:31:37.473: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jul 27 10:31:37.474: INFO: e2e test version: v1.18.5
Jul 27 10:31:37.474: INFO: kube-apiserver version: v1.18.4
Jul 27 10:31:37.474: INFO: >>> kubeConfig: /root/.kube/config
Jul 27 10:31:37.478: INFO: Cluster IP family: ipv4
SSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 10:31:37.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
Jul 27 10:31:37.542: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-3750eab0-5d42-407c-8608-4b10cc364288 in namespace container-probe-4235
Jul 27 10:31:43.567: INFO: Started pod liveness-3750eab0-5d42-407c-8608-4b10cc364288 in namespace container-probe-4235
STEP: checking the pod's current state and verifying that restartCount is present
Jul 27 10:31:43.570: INFO: Initial restart count of pod liveness-3750eab0-5d42-407c-8608-4b10cc364288 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 10:35:46.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4235" for this suite.
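The spec above only watches restartCount on a pod whose liveness probe dials TCP port 8080. A minimal client-go sketch of that kind of pod follows (Go, like the suite itself); the pod name, image, command and probe timings are illustrative assumptions, not values taken from this run, and the embedded Handler struct is spelled ProbeHandler in newer releases of the API.

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"k8s.io/client-go/kubernetes"
)

// createTCPLivenessPod creates a pod whose liveness probe dials TCP port 8080.
// While the container keeps listening on 8080 the probe succeeds and the
// kubelet never restarts it, so status.containerStatuses[].restartCount
// stays at its initial value, which is the property the test watches.
func createTCPLivenessPod(ctx context.Context, cs kubernetes.Interface, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-tcp-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "server",
				Image:   "busybox", // illustrative; anything that listens on 8080 works
				Command: []string{"sh", "-c", "httpd -f -p 8080"},
				LivenessProbe: &corev1.Probe{
					// The embedded Handler struct is named ProbeHandler in newer API versions.
					Handler: corev1.Handler{
						TCPSocket: &corev1.TCPSocketAction{Port: intstr.FromInt(8080)},
					},
					InitialDelaySeconds: 15, // illustrative timings
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
}

Because something keeps listening on 8080 for the whole observation window, the probe never fails and the restart count stays at 0, which is what the roughly four-minute wait above confirms.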
• [SLOW TEST:249.264 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":7,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:35:46.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-9624 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9624 STEP: creating replication controller externalsvc in namespace services-9624 I0727 10:35:48.410196 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-9624, replica count: 2 I0727 10:35:51.460681 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0727 10:35:54.460933 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0727 10:35:57.461155 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0727 10:36:00.461401 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName Jul 27 10:36:00.516: INFO: Creating new exec pod Jul 27 10:36:10.587: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-9624 execpodrdmpf -- /bin/sh -x -c nslookup clusterip-service' Jul 27 10:36:13.044: INFO: stderr: "I0727 10:36:12.977492 29 log.go:172] (0xc0004080b0) (0xc0005960a0) Create stream\nI0727 10:36:12.977550 29 log.go:172] (0xc0004080b0) (0xc0005960a0) Stream added, broadcasting: 1\nI0727 10:36:12.979280 29 log.go:172] (0xc0004080b0) Reply frame received for 1\nI0727 10:36:12.979329 29 log.go:172] (0xc0004080b0) (0xc000bca000) Create stream\nI0727 10:36:12.979346 29 log.go:172] (0xc0004080b0) (0xc000bca000) Stream added, broadcasting: 3\nI0727 10:36:12.980277 29 log.go:172] (0xc0004080b0) Reply frame received for 3\nI0727 10:36:12.980332 29 
log.go:172] (0xc0004080b0) (0xc000bca0a0) Create stream\nI0727 10:36:12.980351 29 log.go:172] (0xc0004080b0) (0xc000bca0a0) Stream added, broadcasting: 5\nI0727 10:36:12.981132 29 log.go:172] (0xc0004080b0) Reply frame received for 5\nI0727 10:36:13.033796 29 log.go:172] (0xc0004080b0) Data frame received for 5\nI0727 10:36:13.033822 29 log.go:172] (0xc000bca0a0) (5) Data frame handling\nI0727 10:36:13.033848 29 log.go:172] (0xc000bca0a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0727 10:36:13.039277 29 log.go:172] (0xc0004080b0) Data frame received for 3\nI0727 10:36:13.039291 29 log.go:172] (0xc000bca000) (3) Data frame handling\nI0727 10:36:13.039300 29 log.go:172] (0xc000bca000) (3) Data frame sent\nI0727 10:36:13.039889 29 log.go:172] (0xc0004080b0) Data frame received for 3\nI0727 10:36:13.039897 29 log.go:172] (0xc000bca000) (3) Data frame handling\nI0727 10:36:13.039906 29 log.go:172] (0xc000bca000) (3) Data frame sent\nI0727 10:36:13.040207 29 log.go:172] (0xc0004080b0) Data frame received for 5\nI0727 10:36:13.040225 29 log.go:172] (0xc0004080b0) Data frame received for 3\nI0727 10:36:13.040240 29 log.go:172] (0xc000bca000) (3) Data frame handling\nI0727 10:36:13.040255 29 log.go:172] (0xc000bca0a0) (5) Data frame handling\nI0727 10:36:13.041312 29 log.go:172] (0xc0004080b0) Data frame received for 1\nI0727 10:36:13.041323 29 log.go:172] (0xc0005960a0) (1) Data frame handling\nI0727 10:36:13.041329 29 log.go:172] (0xc0005960a0) (1) Data frame sent\nI0727 10:36:13.041335 29 log.go:172] (0xc0004080b0) (0xc0005960a0) Stream removed, broadcasting: 1\nI0727 10:36:13.041342 29 log.go:172] (0xc0004080b0) Go away received\nI0727 10:36:13.041573 29 log.go:172] (0xc0004080b0) (0xc0005960a0) Stream removed, broadcasting: 1\nI0727 10:36:13.041582 29 log.go:172] (0xc0004080b0) (0xc000bca000) Stream removed, broadcasting: 3\nI0727 10:36:13.041587 29 log.go:172] (0xc0004080b0) (0xc000bca0a0) Stream removed, broadcasting: 5\n" Jul 27 10:36:13.044: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-9624.svc.cluster.local\tcanonical name = externalsvc.services-9624.svc.cluster.local.\nName:\texternalsvc.services-9624.svc.cluster.local\nAddress: 10.102.192.64\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9624, will wait for the garbage collector to delete the pods Jul 27 10:36:13.102: INFO: Deleting ReplicationController externalsvc took: 5.048697ms Jul 27 10:36:13.402: INFO: Terminating ReplicationController externalsvc pods took: 300.174938ms Jul 27 10:36:23.516: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:36:23.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9624" for this suite. 
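The step "changing the ClusterIP service to type=ExternalName" comes down to an update of the Service spec; the nslookup output captured above then shows the service's DNS name resolving as a CNAME for externalsvc.services-9624.svc.cluster.local. Below is a hedged client-go sketch of that type flip; the helper name and the conflict-retry wrapper are illustrative, not part of the test code.

package sketches

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// toExternalName flips an existing ClusterIP service to type ExternalName.
// Afterwards the cluster DNS answers queries for the service name with a
// CNAME pointing at the external name, which is what the exec pod's
// nslookup verifies in the log above.
func toExternalName(ctx context.Context, cs kubernetes.Interface, ns, name, target string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		svc, err := cs.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		svc.Spec.Type = corev1.ServiceTypeExternalName
		svc.Spec.ExternalName = target // e.g. "externalsvc.<namespace>.svc.cluster.local"
		svc.Spec.ClusterIP = ""        // ExternalName services carry no cluster IP
		_, err = cs.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
		return err
	})
}

Clearing spec.clusterIP is required because ExternalName services have no cluster IP; the cluster DNS then serves the CNAME record that the exec pod resolves.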
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:36.860 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":2,"skipped":23,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:36:23.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Jul 27 10:36:32.687: INFO: Successfully updated pod "adopt-release-dwp85" STEP: Checking that the Job readopts the Pod Jul 27 10:36:32.687: INFO: Waiting up to 15m0s for pod "adopt-release-dwp85" in namespace "job-4293" to be "adopted" Jul 27 10:36:32.925: INFO: Pod "adopt-release-dwp85": Phase="Running", Reason="", readiness=true. Elapsed: 237.517883ms Jul 27 10:36:34.929: INFO: Pod "adopt-release-dwp85": Phase="Running", Reason="", readiness=true. Elapsed: 2.241835531s Jul 27 10:36:34.929: INFO: Pod "adopt-release-dwp85" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Jul 27 10:36:35.438: INFO: Successfully updated pod "adopt-release-dwp85" STEP: Checking that the Job releases the Pod Jul 27 10:36:35.438: INFO: Waiting up to 15m0s for pod "adopt-release-dwp85" in namespace "job-4293" to be "released" Jul 27 10:36:35.493: INFO: Pod "adopt-release-dwp85": Phase="Running", Reason="", readiness=true. Elapsed: 55.123215ms Jul 27 10:36:37.577: INFO: Pod "adopt-release-dwp85": Phase="Running", Reason="", readiness=true. Elapsed: 2.13870558s Jul 27 10:36:37.577: INFO: Pod "adopt-release-dwp85" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:36:37.577: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4293" for this suite. 
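Adoption in the Job test above is driven purely by ownerReferences and the Job's label selector: the test strips the controller reference from one pod, waits for the Job controller to re-adopt it, then removes the selector labels and waits for the pod to be released. A rough sketch of the adoption half follows, with hypothetical helper names; the release half is the same pattern with the ownership check inverted after removing the labels.

package sketches

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// orphanPod strips the ownerReferences from one of a Job's pods. Because the
// pod still matches the Job's label selector, the Job controller notices the
// orphan and re-adopts it, which is what the "readopts the Pod" step waits for.
func orphanPod(ctx context.Context, cs kubernetes.Interface, pod *corev1.Pod) error {
	pod.OwnerReferences = nil
	_, err := cs.CoreV1().Pods(pod.Namespace).Update(ctx, pod, metav1.UpdateOptions{})
	return err
}

// waitForAdoption polls until the pod again reports a controller owner.
func waitForAdoption(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 15*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return metav1.GetControllerOf(p) != nil, nil
	})
}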
• [SLOW TEST:13.982 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":3,"skipped":55,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:36:37.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 27 10:36:38.117: INFO: Waiting up to 5m0s for pod "pod-fed7477c-29b1-4629-94f9-597ce7c76c2d" in namespace "emptydir-6265" to be "Succeeded or Failed" Jul 27 10:36:38.178: INFO: Pod "pod-fed7477c-29b1-4629-94f9-597ce7c76c2d": Phase="Pending", Reason="", readiness=false. Elapsed: 60.401625ms Jul 27 10:36:40.191: INFO: Pod "pod-fed7477c-29b1-4629-94f9-597ce7c76c2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073502633s Jul 27 10:36:42.195: INFO: Pod "pod-fed7477c-29b1-4629-94f9-597ce7c76c2d": Phase="Running", Reason="", readiness=true. Elapsed: 4.077824148s Jul 27 10:36:44.200: INFO: Pod "pod-fed7477c-29b1-4629-94f9-597ce7c76c2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.082338262s STEP: Saw pod success Jul 27 10:36:44.200: INFO: Pod "pod-fed7477c-29b1-4629-94f9-597ce7c76c2d" satisfied condition "Succeeded or Failed" Jul 27 10:36:44.203: INFO: Trying to get logs from node kali-worker pod pod-fed7477c-29b1-4629-94f9-597ce7c76c2d container test-container: STEP: delete the pod Jul 27 10:36:44.274: INFO: Waiting for pod pod-fed7477c-29b1-4629-94f9-597ce7c76c2d to disappear Jul 27 10:36:44.289: INFO: Pod pod-fed7477c-29b1-4629-94f9-597ce7c76c2d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:36:44.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6265" for this suite. 
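The emptyDir mode tests in this run all follow the same shape: mount an emptyDir volume, have a root or non-root container create a file with the requested mode, and read the result back from the container log before the pod reaches Succeeded. A sketch of such a pod follows; the image, command and mount path are illustrative stand-ins for whatever test image the suite actually uses.

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPermPod builds a pod with an emptyDir volume on the default (node
// disk) medium, mounted by a non-root container that creates a file, applies
// the requested mode and prints the result so it can be scraped from the log.
func emptyDirPermPod(uid int64) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name:         "scratch",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "touch /mnt/scratch/f && chmod 0777 /mnt/scratch/f && stat -c '%a' /mnt/scratch/f"},
				SecurityContext: &corev1.SecurityContext{
					RunAsUser: &uid, // non-root UID, e.g. 1001
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
		},
	}
}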
• [SLOW TEST:6.710 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":4,"skipped":65,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:36:44.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:36:44.390: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:36:48.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3140" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":5,"skipped":102,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:36:48.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Jul 27 10:36:48.602: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:36:57.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3726" for this suite. 
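With restartPolicy Never, the kubelet runs each init container to completion, in order, before it ever starts the regular containers, which is why the scheduler-predicates log further down still shows container run1 not ready while the init containers execute. A minimal pod shape for that behaviour follows; the images and commands are illustrative, and only the container name run1 is borrowed from the log.

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// restartNeverInitPod sketches the ordering this test relies on: each init
// container must exit successfully, one after another, before the regular
// containers are started at all; with restartPolicy Never a failed init
// container simply fails the pod instead of being retried forever.
func restartNeverInitPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-demo"}, // illustrative name
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
}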
• [SLOW TEST:8.751 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":6,"skipped":112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:36:57.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Jul 27 10:36:57.392: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 27 10:36:57.412: INFO: Waiting for terminating namespaces to be deleted... Jul 27 10:36:57.414: INFO: Logging pods the kubelet thinks is on node kali-worker before test Jul 27 10:36:57.419: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Jul 27 10:36:57.419: INFO: Container kube-proxy ready: true, restart count 0 Jul 27 10:36:57.419: INFO: adopt-release-dwp85 from job-4293 started at 2020-07-27 10:36:23 +0000 UTC (1 container statuses recorded) Jul 27 10:36:57.419: INFO: Container c ready: true, restart count 0 Jul 27 10:36:57.419: INFO: adopt-release-t9xd9 from job-4293 started at 2020-07-27 10:36:23 +0000 UTC (1 container statuses recorded) Jul 27 10:36:57.419: INFO: Container c ready: true, restart count 0 Jul 27 10:36:57.419: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Jul 27 10:36:57.419: INFO: Container kindnet-cni ready: true, restart count 1 Jul 27 10:36:57.419: INFO: pod-init-edfd8885-5e70-4384-a0c2-6e28af6435da from init-container-3726 started at 2020-07-27 10:36:48 +0000 UTC (1 container statuses recorded) Jul 27 10:36:57.419: INFO: Container run1 ready: false, restart count 0 Jul 27 10:36:57.419: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Jul 27 10:36:57.434: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 27 10:36:57.435: INFO: Container kindnet-cni ready: true, restart count 1 Jul 27 10:36:57.435: INFO: pod-exec-websocket-f9c2b400-00cf-4837-b109-d6f452a70351 from pods-3140 started at 2020-07-27 10:36:44 +0000 UTC (1 container statuses recorded) Jul 27 10:36:57.435: INFO: Container main ready: true, restart count 0 Jul 27 10:36:57.435: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 27 10:36:57.435: INFO: Container kube-proxy ready: true, restart count 0 Jul 27 10:36:57.435: INFO: adopt-release-9rx2t from job-4293 started at 2020-07-27 10:36:35 +0000 UTC (1 container 
statuses recorded) Jul 27 10:36:57.435: INFO: Container c ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: verifying the node has the label node kali-worker STEP: verifying the node has the label node kali-worker2 Jul 27 10:36:58.611: INFO: Pod adopt-release-9rx2t requesting resource cpu=0m on Node kali-worker2 Jul 27 10:36:58.611: INFO: Pod adopt-release-dwp85 requesting resource cpu=0m on Node kali-worker Jul 27 10:36:58.611: INFO: Pod adopt-release-t9xd9 requesting resource cpu=0m on Node kali-worker Jul 27 10:36:58.611: INFO: Pod kindnet-njbgt requesting resource cpu=100m on Node kali-worker Jul 27 10:36:58.611: INFO: Pod kindnet-pk4xb requesting resource cpu=100m on Node kali-worker2 Jul 27 10:36:58.611: INFO: Pod kube-proxy-qwsfx requesting resource cpu=0m on Node kali-worker Jul 27 10:36:58.611: INFO: Pod kube-proxy-vk6jr requesting resource cpu=0m on Node kali-worker2 Jul 27 10:36:58.611: INFO: Pod pod-exec-websocket-f9c2b400-00cf-4837-b109-d6f452a70351 requesting resource cpu=0m on Node kali-worker2 STEP: Starting Pods to consume most of the cluster CPU. Jul 27 10:36:58.611: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker Jul 27 10:36:58.621: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2 STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-49b71c38-f29a-4cc5-8043-0688b8252539.162595ae42e313a1], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7212/filler-pod-49b71c38-f29a-4cc5-8043-0688b8252539 to kali-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-49b71c38-f29a-4cc5-8043-0688b8252539.162595af2168e49b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-49b71c38-f29a-4cc5-8043-0688b8252539.162595b055820121], Reason = [Created], Message = [Created container filler-pod-49b71c38-f29a-4cc5-8043-0688b8252539] STEP: Considering event: Type = [Normal], Name = [filler-pod-49b71c38-f29a-4cc5-8043-0688b8252539.162595b06569fd32], Reason = [Started], Message = [Started container filler-pod-49b71c38-f29a-4cc5-8043-0688b8252539] STEP: Considering event: Type = [Normal], Name = [filler-pod-d583e0ff-cc8f-449b-a312-c266de40938a.162595ae41c3059d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7212/filler-pod-d583e0ff-cc8f-449b-a312-c266de40938a to kali-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-d583e0ff-cc8f-449b-a312-c266de40938a.162595ae9113137b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-d583e0ff-cc8f-449b-a312-c266de40938a.162595b0405bbaf5], Reason = [Created], Message = [Created container filler-pod-d583e0ff-cc8f-449b-a312-c266de40938a] STEP: Considering event: Type = [Normal], Name = [filler-pod-d583e0ff-cc8f-449b-a312-c266de40938a.162595b05cbf2284], Reason = [Started], Message = [Started container filler-pod-d583e0ff-cc8f-449b-a312-c266de40938a] STEP: Considering event: Type = [Warning], Name = [additional-pod.162595b0991f0373], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.] 
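The FailedScheduling event above is the point of the test: after the filler pods request most of each node's allocatable CPU (11130m per node in this run), one more pod with a non-zero CPU request cannot fit on either worker, and the master node's taint rules out the third node. A sketch of such a filler pod pinned to a node via the temporary label mentioned in the earlier steps; the label key and the helper name are assumptions.

package sketches

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod pins a CPU request to one node (via a nodeSelector on the label
// the test temporarily applies) so the scheduler must account for it. Once
// the fillers consume most of the allocatable CPU, any further pod with a
// non-zero request fails with "Insufficient cpu", as in the event above.
func fillerPod(name, nodeLabelValue, cpu string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"node": nodeLabelValue}, // assumed key for the "label node" step
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2", // image named in the Pulled events above
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)}, // e.g. "11130m"
					Limits:   corev1.ResourceList{corev1.ResourceCPU: resource.MustParse(cpu)},
				},
			}},
		},
	}
}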
STEP: removing the label node off the node kali-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node kali-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:37:09.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7212" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:12.455 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":275,"completed":7,"skipped":139,"failed":0} [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:37:09.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 27 10:37:09.824: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5" in namespace "downward-api-1841" to be "Succeeded or Failed" Jul 27 10:37:09.829: INFO: Pod "downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.056899ms Jul 27 10:37:11.943: INFO: Pod "downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119011479s Jul 27 10:37:14.949: INFO: Pod "downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5": Phase="Pending", Reason="", readiness=false. Elapsed: 5.125217532s Jul 27 10:37:16.961: INFO: Pod "downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.137162386s Jul 27 10:37:19.160: INFO: Pod "downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.336606155s Jul 27 10:37:22.660: INFO: Pod "downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.835947613s STEP: Saw pod success Jul 27 10:37:22.660: INFO: Pod "downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5" satisfied condition "Succeeded or Failed" Jul 27 10:37:22.787: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5 container client-container: STEP: delete the pod Jul 27 10:37:23.943: INFO: Waiting for pod downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5 to disappear Jul 27 10:37:24.165: INFO: Pod downwardapi-volume-76381cdd-2ac8-418f-a848-a2cb5e0a74f5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:37:24.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1841" for this suite. • [SLOW TEST:15.306 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":139,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:37:25.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 27 10:37:46.844: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:37:47.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5984" for this suite. 
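terminationMessagePolicy FallbackToLogsOnError is what turns the container's log tail into the termination message checked above ("Expected: &{DONE} to match Container's Termination Message: DONE"): the container exits non-zero without writing /dev/termination-log, so the kubelet copies the end of its log into the terminated state. A small sketch of such a container follows; the image and command are illustrative.

package sketches

import (
	corev1 "k8s.io/api/core/v1"
)

// failWithLogMessage returns a container that prints to stdout and exits
// non-zero without writing /dev/termination-log. With the
// FallbackToLogsOnError policy, the kubelet copies the log tail ("DONE")
// into status.containerStatuses[].state.terminated.message.
func failWithLogMessage() corev1.Container {
	return corev1.Container{
		Name:                     "termination-demo",
		Image:                    "busybox",
		Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}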
• [SLOW TEST:22.262 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":9,"skipped":152,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:37:47.333: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Jul 27 10:37:47.951: INFO: Waiting up to 5m0s for pod "pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db" in namespace "emptydir-496" to be "Succeeded or Failed" Jul 27 10:37:49.027: INFO: Pod "pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db": Phase="Pending", Reason="", readiness=false. Elapsed: 1.076340805s Jul 27 10:37:51.031: INFO: Pod "pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db": Phase="Pending", Reason="", readiness=false. Elapsed: 3.080147906s Jul 27 10:37:53.063: INFO: Pod "pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db": Phase="Pending", Reason="", readiness=false. Elapsed: 5.11229185s Jul 27 10:37:55.236: INFO: Pod "pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db": Phase="Running", Reason="", readiness=true. Elapsed: 7.285583761s Jul 27 10:37:57.240: INFO: Pod "pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.289290968s STEP: Saw pod success Jul 27 10:37:57.240: INFO: Pod "pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db" satisfied condition "Succeeded or Failed" Jul 27 10:37:57.243: INFO: Trying to get logs from node kali-worker pod pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db container test-container: STEP: delete the pod Jul 27 10:37:57.576: INFO: Waiting for pod pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db to disappear Jul 27 10:37:57.602: INFO: Pod pod-a9d56a80-88b0-405f-9cf0-4d9da02ee3db no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:37:57.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-496" for this suite. 
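The repeated Phase="Pending" ... Elapsed: lines in these blocks come from a simple poll of the pod phase until it reaches a terminal state. A hedged reimplementation of that wait loop with client-go follows; the real framework treats Failed as satisfying the "Succeeded or Failed" condition and asserts success separately, whereas this sketch simply errors out on failure.

package sketches

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodCompletion polls the pod phase every couple of seconds for up to
// five minutes, printing a line per attempt much like the log above, and
// stops once the pod reaches a terminal phase.
func waitForPodCompletion(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	start := time.Now()
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("Pod %q: Phase=%q. Elapsed: %v\n", name, p.Status.Phase, time.Since(start))
		switch p.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil
		case corev1.PodFailed:
			return false, fmt.Errorf("pod %q failed", name)
		}
		return false, nil
	})
}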
• [SLOW TEST:10.274 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":10,"skipped":205,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:37:57.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Jul 27 10:38:10.037: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.037: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.062721 7 log.go:172] (0xc002da49a0) (0xc00096ed20) Create stream I0727 10:38:10.062740 7 log.go:172] (0xc002da49a0) (0xc00096ed20) Stream added, broadcasting: 1 I0727 10:38:10.064881 7 log.go:172] (0xc002da49a0) Reply frame received for 1 I0727 10:38:10.064904 7 log.go:172] (0xc002da49a0) (0xc00096edc0) Create stream I0727 10:38:10.064912 7 log.go:172] (0xc002da49a0) (0xc00096edc0) Stream added, broadcasting: 3 I0727 10:38:10.065562 7 log.go:172] (0xc002da49a0) Reply frame received for 3 I0727 10:38:10.065595 7 log.go:172] (0xc002da49a0) (0xc00044ae60) Create stream I0727 10:38:10.065606 7 log.go:172] (0xc002da49a0) (0xc00044ae60) Stream added, broadcasting: 5 I0727 10:38:10.066367 7 log.go:172] (0xc002da49a0) Reply frame received for 5 I0727 10:38:10.139968 7 log.go:172] (0xc002da49a0) Data frame received for 3 I0727 10:38:10.139996 7 log.go:172] (0xc00096edc0) (3) Data frame handling I0727 10:38:10.140010 7 log.go:172] (0xc00096edc0) (3) Data frame sent I0727 10:38:10.140021 7 log.go:172] (0xc002da49a0) Data frame received for 3 I0727 10:38:10.140035 7 log.go:172] (0xc00096edc0) (3) Data frame handling I0727 10:38:10.140069 7 log.go:172] (0xc002da49a0) Data frame received for 5 I0727 10:38:10.140081 7 log.go:172] (0xc00044ae60) (5) Data frame handling I0727 10:38:10.141543 7 log.go:172] (0xc002da49a0) Data frame received for 1 I0727 10:38:10.141578 7 log.go:172] (0xc00096ed20) (1) Data frame handling I0727 10:38:10.141600 7 log.go:172] (0xc00096ed20) (1) Data frame sent I0727 10:38:10.141613 7 log.go:172] (0xc002da49a0) (0xc00096ed20) Stream removed, broadcasting: 1 I0727 10:38:10.141716 7 log.go:172] (0xc002da49a0) Go away 
received I0727 10:38:10.142072 7 log.go:172] (0xc002da49a0) (0xc00096ed20) Stream removed, broadcasting: 1 I0727 10:38:10.142108 7 log.go:172] (0xc002da49a0) (0xc00096edc0) Stream removed, broadcasting: 3 I0727 10:38:10.142135 7 log.go:172] (0xc002da49a0) (0xc00044ae60) Stream removed, broadcasting: 5 Jul 27 10:38:10.142: INFO: Exec stderr: "" Jul 27 10:38:10.142: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.142: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.174646 7 log.go:172] (0xc002da4fd0) (0xc00096f720) Create stream I0727 10:38:10.174675 7 log.go:172] (0xc002da4fd0) (0xc00096f720) Stream added, broadcasting: 1 I0727 10:38:10.176981 7 log.go:172] (0xc002da4fd0) Reply frame received for 1 I0727 10:38:10.177019 7 log.go:172] (0xc002da4fd0) (0xc00044b540) Create stream I0727 10:38:10.177033 7 log.go:172] (0xc002da4fd0) (0xc00044b540) Stream added, broadcasting: 3 I0727 10:38:10.177775 7 log.go:172] (0xc002da4fd0) Reply frame received for 3 I0727 10:38:10.177845 7 log.go:172] (0xc002da4fd0) (0xc0004d4820) Create stream I0727 10:38:10.177870 7 log.go:172] (0xc002da4fd0) (0xc0004d4820) Stream added, broadcasting: 5 I0727 10:38:10.178526 7 log.go:172] (0xc002da4fd0) Reply frame received for 5 I0727 10:38:10.245122 7 log.go:172] (0xc002da4fd0) Data frame received for 5 I0727 10:38:10.245156 7 log.go:172] (0xc002da4fd0) Data frame received for 3 I0727 10:38:10.245184 7 log.go:172] (0xc00044b540) (3) Data frame handling I0727 10:38:10.245200 7 log.go:172] (0xc00044b540) (3) Data frame sent I0727 10:38:10.245211 7 log.go:172] (0xc002da4fd0) Data frame received for 3 I0727 10:38:10.245221 7 log.go:172] (0xc00044b540) (3) Data frame handling I0727 10:38:10.245246 7 log.go:172] (0xc0004d4820) (5) Data frame handling I0727 10:38:10.246489 7 log.go:172] (0xc002da4fd0) Data frame received for 1 I0727 10:38:10.246501 7 log.go:172] (0xc00096f720) (1) Data frame handling I0727 10:38:10.246512 7 log.go:172] (0xc00096f720) (1) Data frame sent I0727 10:38:10.246523 7 log.go:172] (0xc002da4fd0) (0xc00096f720) Stream removed, broadcasting: 1 I0727 10:38:10.246535 7 log.go:172] (0xc002da4fd0) Go away received I0727 10:38:10.246634 7 log.go:172] (0xc002da4fd0) (0xc00096f720) Stream removed, broadcasting: 1 I0727 10:38:10.246648 7 log.go:172] (0xc002da4fd0) (0xc00044b540) Stream removed, broadcasting: 3 I0727 10:38:10.246658 7 log.go:172] (0xc002da4fd0) (0xc0004d4820) Stream removed, broadcasting: 5 Jul 27 10:38:10.246: INFO: Exec stderr: "" Jul 27 10:38:10.246: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.246: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.281338 7 log.go:172] (0xc002965a20) (0xc0004d4e60) Create stream I0727 10:38:10.281359 7 log.go:172] (0xc002965a20) (0xc0004d4e60) Stream added, broadcasting: 1 I0727 10:38:10.283118 7 log.go:172] (0xc002965a20) Reply frame received for 1 I0727 10:38:10.283137 7 log.go:172] (0xc002965a20) (0xc00096f9a0) Create stream I0727 10:38:10.283148 7 log.go:172] (0xc002965a20) (0xc00096f9a0) Stream added, broadcasting: 3 I0727 10:38:10.283689 7 log.go:172] (0xc002965a20) Reply frame received for 3 I0727 10:38:10.283726 7 log.go:172] (0xc002965a20) (0xc00044ba40) Create stream I0727 10:38:10.283741 7 log.go:172] 
(0xc002965a20) (0xc00044ba40) Stream added, broadcasting: 5 I0727 10:38:10.284255 7 log.go:172] (0xc002965a20) Reply frame received for 5 I0727 10:38:10.334362 7 log.go:172] (0xc002965a20) Data frame received for 5 I0727 10:38:10.334452 7 log.go:172] (0xc00044ba40) (5) Data frame handling I0727 10:38:10.334488 7 log.go:172] (0xc002965a20) Data frame received for 3 I0727 10:38:10.334505 7 log.go:172] (0xc00096f9a0) (3) Data frame handling I0727 10:38:10.334518 7 log.go:172] (0xc00096f9a0) (3) Data frame sent I0727 10:38:10.334533 7 log.go:172] (0xc002965a20) Data frame received for 3 I0727 10:38:10.334544 7 log.go:172] (0xc00096f9a0) (3) Data frame handling I0727 10:38:10.335419 7 log.go:172] (0xc002965a20) Data frame received for 1 I0727 10:38:10.335440 7 log.go:172] (0xc0004d4e60) (1) Data frame handling I0727 10:38:10.335453 7 log.go:172] (0xc0004d4e60) (1) Data frame sent I0727 10:38:10.335462 7 log.go:172] (0xc002965a20) (0xc0004d4e60) Stream removed, broadcasting: 1 I0727 10:38:10.335473 7 log.go:172] (0xc002965a20) Go away received I0727 10:38:10.335569 7 log.go:172] (0xc002965a20) (0xc0004d4e60) Stream removed, broadcasting: 1 I0727 10:38:10.335601 7 log.go:172] (0xc002965a20) (0xc00096f9a0) Stream removed, broadcasting: 3 I0727 10:38:10.335613 7 log.go:172] (0xc002965a20) (0xc00044ba40) Stream removed, broadcasting: 5 Jul 27 10:38:10.335: INFO: Exec stderr: "" Jul 27 10:38:10.335: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.335: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.367309 7 log.go:172] (0xc002c704d0) (0xc000c69720) Create stream I0727 10:38:10.367342 7 log.go:172] (0xc002c704d0) (0xc000c69720) Stream added, broadcasting: 1 I0727 10:38:10.370649 7 log.go:172] (0xc002c704d0) Reply frame received for 1 I0727 10:38:10.370708 7 log.go:172] (0xc002c704d0) (0xc00096fb80) Create stream I0727 10:38:10.370764 7 log.go:172] (0xc002c704d0) (0xc00096fb80) Stream added, broadcasting: 3 I0727 10:38:10.372050 7 log.go:172] (0xc002c704d0) Reply frame received for 3 I0727 10:38:10.372098 7 log.go:172] (0xc002c704d0) (0xc000c69860) Create stream I0727 10:38:10.372122 7 log.go:172] (0xc002c704d0) (0xc000c69860) Stream added, broadcasting: 5 I0727 10:38:10.373325 7 log.go:172] (0xc002c704d0) Reply frame received for 5 I0727 10:38:10.434720 7 log.go:172] (0xc002c704d0) Data frame received for 3 I0727 10:38:10.434773 7 log.go:172] (0xc00096fb80) (3) Data frame handling I0727 10:38:10.434791 7 log.go:172] (0xc00096fb80) (3) Data frame sent I0727 10:38:10.434813 7 log.go:172] (0xc002c704d0) Data frame received for 3 I0727 10:38:10.434825 7 log.go:172] (0xc00096fb80) (3) Data frame handling I0727 10:38:10.434855 7 log.go:172] (0xc002c704d0) Data frame received for 5 I0727 10:38:10.434881 7 log.go:172] (0xc000c69860) (5) Data frame handling I0727 10:38:10.436055 7 log.go:172] (0xc002c704d0) Data frame received for 1 I0727 10:38:10.436080 7 log.go:172] (0xc000c69720) (1) Data frame handling I0727 10:38:10.436108 7 log.go:172] (0xc000c69720) (1) Data frame sent I0727 10:38:10.436124 7 log.go:172] (0xc002c704d0) (0xc000c69720) Stream removed, broadcasting: 1 I0727 10:38:10.436196 7 log.go:172] (0xc002c704d0) Go away received I0727 10:38:10.436235 7 log.go:172] (0xc002c704d0) (0xc000c69720) Stream removed, broadcasting: 1 I0727 10:38:10.436263 7 log.go:172] (0xc002c704d0) (0xc00096fb80) Stream removed, broadcasting: 3 
I0727 10:38:10.436286 7 log.go:172] (0xc002c704d0) (0xc000c69860) Stream removed, broadcasting: 5 Jul 27 10:38:10.436: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jul 27 10:38:10.436: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.436: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.468181 7 log.go:172] (0xc002db2630) (0xc0004d50e0) Create stream I0727 10:38:10.468194 7 log.go:172] (0xc002db2630) (0xc0004d50e0) Stream added, broadcasting: 1 I0727 10:38:10.469745 7 log.go:172] (0xc002db2630) Reply frame received for 1 I0727 10:38:10.469794 7 log.go:172] (0xc002db2630) (0xc00044bb80) Create stream I0727 10:38:10.469818 7 log.go:172] (0xc002db2630) (0xc00044bb80) Stream added, broadcasting: 3 I0727 10:38:10.470654 7 log.go:172] (0xc002db2630) Reply frame received for 3 I0727 10:38:10.470690 7 log.go:172] (0xc002db2630) (0xc000252640) Create stream I0727 10:38:10.470704 7 log.go:172] (0xc002db2630) (0xc000252640) Stream added, broadcasting: 5 I0727 10:38:10.471800 7 log.go:172] (0xc002db2630) Reply frame received for 5 I0727 10:38:10.513824 7 log.go:172] (0xc002db2630) Data frame received for 5 I0727 10:38:10.513880 7 log.go:172] (0xc000252640) (5) Data frame handling I0727 10:38:10.513914 7 log.go:172] (0xc002db2630) Data frame received for 3 I0727 10:38:10.513950 7 log.go:172] (0xc00044bb80) (3) Data frame handling I0727 10:38:10.513972 7 log.go:172] (0xc00044bb80) (3) Data frame sent I0727 10:38:10.513988 7 log.go:172] (0xc002db2630) Data frame received for 3 I0727 10:38:10.514003 7 log.go:172] (0xc00044bb80) (3) Data frame handling I0727 10:38:10.515374 7 log.go:172] (0xc002db2630) Data frame received for 1 I0727 10:38:10.515403 7 log.go:172] (0xc0004d50e0) (1) Data frame handling I0727 10:38:10.515428 7 log.go:172] (0xc0004d50e0) (1) Data frame sent I0727 10:38:10.515445 7 log.go:172] (0xc002db2630) (0xc0004d50e0) Stream removed, broadcasting: 1 I0727 10:38:10.515536 7 log.go:172] (0xc002db2630) Go away received I0727 10:38:10.515587 7 log.go:172] (0xc002db2630) (0xc0004d50e0) Stream removed, broadcasting: 1 I0727 10:38:10.515621 7 log.go:172] (0xc002db2630) (0xc00044bb80) Stream removed, broadcasting: 3 I0727 10:38:10.515637 7 log.go:172] (0xc002db2630) (0xc000252640) Stream removed, broadcasting: 5 Jul 27 10:38:10.515: INFO: Exec stderr: "" Jul 27 10:38:10.515: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.515: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.545770 7 log.go:172] (0xc002db2bb0) (0xc0004d5860) Create stream I0727 10:38:10.545797 7 log.go:172] (0xc002db2bb0) (0xc0004d5860) Stream added, broadcasting: 1 I0727 10:38:10.548091 7 log.go:172] (0xc002db2bb0) Reply frame received for 1 I0727 10:38:10.548118 7 log.go:172] (0xc002db2bb0) (0xc0002cc3c0) Create stream I0727 10:38:10.548129 7 log.go:172] (0xc002db2bb0) (0xc0002cc3c0) Stream added, broadcasting: 3 I0727 10:38:10.549179 7 log.go:172] (0xc002db2bb0) Reply frame received for 3 I0727 10:38:10.549216 7 log.go:172] (0xc002db2bb0) (0xc0002cce60) Create stream I0727 10:38:10.549230 7 log.go:172] (0xc002db2bb0) (0xc0002cce60) Stream added, broadcasting: 5 I0727 10:38:10.550240 7 log.go:172] 
(0xc002db2bb0) Reply frame received for 5 I0727 10:38:10.614085 7 log.go:172] (0xc002db2bb0) Data frame received for 3 I0727 10:38:10.614110 7 log.go:172] (0xc0002cc3c0) (3) Data frame handling I0727 10:38:10.614118 7 log.go:172] (0xc0002cc3c0) (3) Data frame sent I0727 10:38:10.614124 7 log.go:172] (0xc002db2bb0) Data frame received for 3 I0727 10:38:10.614132 7 log.go:172] (0xc0002cc3c0) (3) Data frame handling I0727 10:38:10.614146 7 log.go:172] (0xc002db2bb0) Data frame received for 5 I0727 10:38:10.614152 7 log.go:172] (0xc0002cce60) (5) Data frame handling I0727 10:38:10.615161 7 log.go:172] (0xc002db2bb0) Data frame received for 1 I0727 10:38:10.615180 7 log.go:172] (0xc0004d5860) (1) Data frame handling I0727 10:38:10.615194 7 log.go:172] (0xc0004d5860) (1) Data frame sent I0727 10:38:10.615205 7 log.go:172] (0xc002db2bb0) (0xc0004d5860) Stream removed, broadcasting: 1 I0727 10:38:10.615218 7 log.go:172] (0xc002db2bb0) Go away received I0727 10:38:10.615383 7 log.go:172] (0xc002db2bb0) (0xc0004d5860) Stream removed, broadcasting: 1 I0727 10:38:10.615406 7 log.go:172] (0xc002db2bb0) (0xc0002cc3c0) Stream removed, broadcasting: 3 I0727 10:38:10.615419 7 log.go:172] (0xc002db2bb0) (0xc0002cce60) Stream removed, broadcasting: 5 Jul 27 10:38:10.615: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jul 27 10:38:10.615: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.615: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.645732 7 log.go:172] (0xc002db31e0) (0xc000185720) Create stream I0727 10:38:10.645754 7 log.go:172] (0xc002db31e0) (0xc000185720) Stream added, broadcasting: 1 I0727 10:38:10.648183 7 log.go:172] (0xc002db31e0) Reply frame received for 1 I0727 10:38:10.648219 7 log.go:172] (0xc002db31e0) (0xc000c699a0) Create stream I0727 10:38:10.648232 7 log.go:172] (0xc002db31e0) (0xc000c699a0) Stream added, broadcasting: 3 I0727 10:38:10.649222 7 log.go:172] (0xc002db31e0) Reply frame received for 3 I0727 10:38:10.649250 7 log.go:172] (0xc002db31e0) (0xc000185ea0) Create stream I0727 10:38:10.649261 7 log.go:172] (0xc002db31e0) (0xc000185ea0) Stream added, broadcasting: 5 I0727 10:38:10.650091 7 log.go:172] (0xc002db31e0) Reply frame received for 5 I0727 10:38:10.704600 7 log.go:172] (0xc002db31e0) Data frame received for 5 I0727 10:38:10.704661 7 log.go:172] (0xc000185ea0) (5) Data frame handling I0727 10:38:10.704695 7 log.go:172] (0xc002db31e0) Data frame received for 3 I0727 10:38:10.704715 7 log.go:172] (0xc000c699a0) (3) Data frame handling I0727 10:38:10.704864 7 log.go:172] (0xc000c699a0) (3) Data frame sent I0727 10:38:10.704904 7 log.go:172] (0xc002db31e0) Data frame received for 3 I0727 10:38:10.704925 7 log.go:172] (0xc000c699a0) (3) Data frame handling I0727 10:38:10.706130 7 log.go:172] (0xc002db31e0) Data frame received for 1 I0727 10:38:10.706156 7 log.go:172] (0xc000185720) (1) Data frame handling I0727 10:38:10.706184 7 log.go:172] (0xc000185720) (1) Data frame sent I0727 10:38:10.706205 7 log.go:172] (0xc002db31e0) (0xc000185720) Stream removed, broadcasting: 1 I0727 10:38:10.706279 7 log.go:172] (0xc002db31e0) Go away received I0727 10:38:10.706314 7 log.go:172] (0xc002db31e0) (0xc000185720) Stream removed, broadcasting: 1 I0727 10:38:10.706339 7 log.go:172] (0xc002db31e0) (0xc000c699a0) Stream removed, 
broadcasting: 3 I0727 10:38:10.706357 7 log.go:172] (0xc002db31e0) (0xc000185ea0) Stream removed, broadcasting: 5 Jul 27 10:38:10.706: INFO: Exec stderr: "" Jul 27 10:38:10.706: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.706: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.749565 7 log.go:172] (0xc002c70bb0) (0xc000c69d60) Create stream I0727 10:38:10.749595 7 log.go:172] (0xc002c70bb0) (0xc000c69d60) Stream added, broadcasting: 1 I0727 10:38:10.752711 7 log.go:172] (0xc002c70bb0) Reply frame received for 1 I0727 10:38:10.752858 7 log.go:172] (0xc002c70bb0) (0xc0000f9180) Create stream I0727 10:38:10.752875 7 log.go:172] (0xc002c70bb0) (0xc0000f9180) Stream added, broadcasting: 3 I0727 10:38:10.754057 7 log.go:172] (0xc002c70bb0) Reply frame received for 3 I0727 10:38:10.754088 7 log.go:172] (0xc002c70bb0) (0xc0002cd4a0) Create stream I0727 10:38:10.754099 7 log.go:172] (0xc002c70bb0) (0xc0002cd4a0) Stream added, broadcasting: 5 I0727 10:38:10.754855 7 log.go:172] (0xc002c70bb0) Reply frame received for 5 I0727 10:38:10.810580 7 log.go:172] (0xc002c70bb0) Data frame received for 5 I0727 10:38:10.810612 7 log.go:172] (0xc0002cd4a0) (5) Data frame handling I0727 10:38:10.810664 7 log.go:172] (0xc002c70bb0) Data frame received for 3 I0727 10:38:10.810702 7 log.go:172] (0xc0000f9180) (3) Data frame handling I0727 10:38:10.810718 7 log.go:172] (0xc0000f9180) (3) Data frame sent I0727 10:38:10.810734 7 log.go:172] (0xc002c70bb0) Data frame received for 3 I0727 10:38:10.810744 7 log.go:172] (0xc0000f9180) (3) Data frame handling I0727 10:38:10.811933 7 log.go:172] (0xc002c70bb0) Data frame received for 1 I0727 10:38:10.811965 7 log.go:172] (0xc000c69d60) (1) Data frame handling I0727 10:38:10.811988 7 log.go:172] (0xc000c69d60) (1) Data frame sent I0727 10:38:10.811999 7 log.go:172] (0xc002c70bb0) (0xc000c69d60) Stream removed, broadcasting: 1 I0727 10:38:10.812038 7 log.go:172] (0xc002c70bb0) Go away received I0727 10:38:10.812067 7 log.go:172] (0xc002c70bb0) (0xc000c69d60) Stream removed, broadcasting: 1 I0727 10:38:10.812077 7 log.go:172] (0xc002c70bb0) (0xc0000f9180) Stream removed, broadcasting: 3 I0727 10:38:10.812084 7 log.go:172] (0xc002c70bb0) (0xc0002cd4a0) Stream removed, broadcasting: 5 Jul 27 10:38:10.812: INFO: Exec stderr: "" Jul 27 10:38:10.812: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.812: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.861435 7 log.go:172] (0xc002c711e0) (0xc000ba2aa0) Create stream I0727 10:38:10.861456 7 log.go:172] (0xc002c711e0) (0xc000ba2aa0) Stream added, broadcasting: 1 I0727 10:38:10.863398 7 log.go:172] (0xc002c711e0) Reply frame received for 1 I0727 10:38:10.863440 7 log.go:172] (0xc002c711e0) (0xc000b44000) Create stream I0727 10:38:10.863455 7 log.go:172] (0xc002c711e0) (0xc000b44000) Stream added, broadcasting: 3 I0727 10:38:10.864267 7 log.go:172] (0xc002c711e0) Reply frame received for 3 I0727 10:38:10.864310 7 log.go:172] (0xc002c711e0) (0xc0002cd860) Create stream I0727 10:38:10.864325 7 log.go:172] (0xc002c711e0) (0xc0002cd860) Stream added, broadcasting: 5 I0727 10:38:10.865374 7 log.go:172] (0xc002c711e0) Reply frame received for 5 I0727 10:38:10.917912 7 
log.go:172] (0xc002c711e0) Data frame received for 3 I0727 10:38:10.917930 7 log.go:172] (0xc000b44000) (3) Data frame handling I0727 10:38:10.917946 7 log.go:172] (0xc000b44000) (3) Data frame sent I0727 10:38:10.917954 7 log.go:172] (0xc002c711e0) Data frame received for 3 I0727 10:38:10.917961 7 log.go:172] (0xc000b44000) (3) Data frame handling I0727 10:38:10.917977 7 log.go:172] (0xc002c711e0) Data frame received for 5 I0727 10:38:10.917987 7 log.go:172] (0xc0002cd860) (5) Data frame handling I0727 10:38:10.919240 7 log.go:172] (0xc002c711e0) Data frame received for 1 I0727 10:38:10.919252 7 log.go:172] (0xc000ba2aa0) (1) Data frame handling I0727 10:38:10.919261 7 log.go:172] (0xc000ba2aa0) (1) Data frame sent I0727 10:38:10.919304 7 log.go:172] (0xc002c711e0) (0xc000ba2aa0) Stream removed, broadcasting: 1 I0727 10:38:10.919384 7 log.go:172] (0xc002c711e0) (0xc000ba2aa0) Stream removed, broadcasting: 1 I0727 10:38:10.919395 7 log.go:172] (0xc002c711e0) (0xc000b44000) Stream removed, broadcasting: 3 I0727 10:38:10.919487 7 log.go:172] (0xc002c711e0) (0xc0002cd860) Stream removed, broadcasting: 5 Jul 27 10:38:10.919: INFO: Exec stderr: "" Jul 27 10:38:10.919: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1579 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 10:38:10.919: INFO: >>> kubeConfig: /root/.kube/config I0727 10:38:10.919598 7 log.go:172] (0xc002c711e0) Go away received I0727 10:38:10.947364 7 log.go:172] (0xc002d66420) (0xc000c665a0) Create stream I0727 10:38:10.947384 7 log.go:172] (0xc002d66420) (0xc000c665a0) Stream added, broadcasting: 1 I0727 10:38:10.950210 7 log.go:172] (0xc002d66420) Reply frame received for 1 I0727 10:38:10.950262 7 log.go:172] (0xc002d66420) (0xc000ba2e60) Create stream I0727 10:38:10.950279 7 log.go:172] (0xc002d66420) (0xc000ba2e60) Stream added, broadcasting: 3 I0727 10:38:10.951186 7 log.go:172] (0xc002d66420) Reply frame received for 3 I0727 10:38:10.951213 7 log.go:172] (0xc002d66420) (0xc000ba30e0) Create stream I0727 10:38:10.951227 7 log.go:172] (0xc002d66420) (0xc000ba30e0) Stream added, broadcasting: 5 I0727 10:38:10.952130 7 log.go:172] (0xc002d66420) Reply frame received for 5 I0727 10:38:11.008027 7 log.go:172] (0xc002d66420) Data frame received for 5 I0727 10:38:11.008067 7 log.go:172] (0xc000ba30e0) (5) Data frame handling I0727 10:38:11.008086 7 log.go:172] (0xc002d66420) Data frame received for 3 I0727 10:38:11.008104 7 log.go:172] (0xc000ba2e60) (3) Data frame handling I0727 10:38:11.008122 7 log.go:172] (0xc000ba2e60) (3) Data frame sent I0727 10:38:11.008127 7 log.go:172] (0xc002d66420) Data frame received for 3 I0727 10:38:11.008132 7 log.go:172] (0xc000ba2e60) (3) Data frame handling I0727 10:38:11.009124 7 log.go:172] (0xc002d66420) Data frame received for 1 I0727 10:38:11.009143 7 log.go:172] (0xc000c665a0) (1) Data frame handling I0727 10:38:11.009159 7 log.go:172] (0xc000c665a0) (1) Data frame sent I0727 10:38:11.009178 7 log.go:172] (0xc002d66420) (0xc000c665a0) Stream removed, broadcasting: 1 I0727 10:38:11.009195 7 log.go:172] (0xc002d66420) Go away received I0727 10:38:11.009299 7 log.go:172] (0xc002d66420) (0xc000c665a0) Stream removed, broadcasting: 1 I0727 10:38:11.009312 7 log.go:172] (0xc002d66420) (0xc000ba2e60) Stream removed, broadcasting: 3 I0727 10:38:11.009317 7 log.go:172] (0xc002d66420) (0xc000ba30e0) Stream removed, broadcasting: 5 Jul 27 10:38:11.009: INFO: Exec stderr: "" 
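The exec calls above read /etc/hosts and /etc/hosts-original from both containers of test-host-network-pod to confirm that, when a pod runs with hostNetwork=true, the kubelet leaves the image's hosts file untouched; for ordinary pods the kubelet mounts a managed /etc/hosts instead. A minimal manifest sketch of the two cases the suite contrasts (pod names, image and command are illustrative, not the objects the test actually creates):

apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-managed-demo        # hypothetical name
spec:
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]
  # hostNetwork defaults to false, so the kubelet injects its managed /etc/hosts,
  # recognizable by its "# Kubernetes-managed hosts file" header
---
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-hostnetwork-demo    # hypothetical name
spec:
  hostNetwork: true                   # containers see the node's own /etc/hosts, unmanaged by the kubelet
  containers:
  - name: busybox-1
    image: busybox
    command: ["sleep", "3600"]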
[AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:38:11.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1579" for this suite. • [SLOW TEST:13.409 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":225,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:38:11.018: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 27 10:38:11.101: INFO: Waiting up to 5m0s for pod "pod-9c1f1785-e85d-4f51-83bc-ac43a2f3a166" in namespace "emptydir-2719" to be "Succeeded or Failed" Jul 27 10:38:11.115: INFO: Pod "pod-9c1f1785-e85d-4f51-83bc-ac43a2f3a166": Phase="Pending", Reason="", readiness=false. Elapsed: 13.981363ms Jul 27 10:38:13.321: INFO: Pod "pod-9c1f1785-e85d-4f51-83bc-ac43a2f3a166": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219113735s Jul 27 10:38:16.166: INFO: Pod "pod-9c1f1785-e85d-4f51-83bc-ac43a2f3a166": Phase="Succeeded", Reason="", readiness=false. Elapsed: 5.06438526s STEP: Saw pod success Jul 27 10:38:16.166: INFO: Pod "pod-9c1f1785-e85d-4f51-83bc-ac43a2f3a166" satisfied condition "Succeeded or Failed" Jul 27 10:38:16.169: INFO: Trying to get logs from node kali-worker pod pod-9c1f1785-e85d-4f51-83bc-ac43a2f3a166 container test-container: STEP: delete the pod Jul 27 10:38:16.553: INFO: Waiting for pod pod-9c1f1785-e85d-4f51-83bc-ac43a2f3a166 to disappear Jul 27 10:38:16.566: INFO: Pod pod-9c1f1785-e85d-4f51-83bc-ac43a2f3a166 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:38:16.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2719" for this suite. 
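The emptyDir pod above runs to completion and is checked against the "Succeeded or Failed" condition, which is how the suite asserts that a non-root user can create and read back a mode-0777 file on a tmpfs-backed emptyDir. A minimal sketch of such a pod (name, image and shell command are illustrative; the suite uses its own mount-test image rather than this script):

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # non-root, matching the (non-root,0777,tmpfs) variant
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /mnt/scratch/f && chmod 0777 /mnt/scratch/f && ls -l /mnt/scratch/f"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch

Because emptyDir volumes are mounted world-writable by default, the non-root user can write to the volume, and the pod reaching Succeeded plus its container log (fetched above with "Trying to get logs") is enough to verify the permissions.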
• [SLOW TEST:5.558 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":241,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:38:16.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:38:16.651: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jul 27 10:38:19.631: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4938 create -f -' Jul 27 10:38:23.250: INFO: stderr: "" Jul 27 10:38:23.250: INFO: stdout: "e2e-test-crd-publish-openapi-7193-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 27 10:38:23.250: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4938 delete e2e-test-crd-publish-openapi-7193-crds test-cr' Jul 27 10:38:23.357: INFO: stderr: "" Jul 27 10:38:23.357: INFO: stdout: "e2e-test-crd-publish-openapi-7193-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" Jul 27 10:38:23.357: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4938 apply -f -' Jul 27 10:38:23.604: INFO: stderr: "" Jul 27 10:38:23.604: INFO: stdout: "e2e-test-crd-publish-openapi-7193-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" Jul 27 10:38:23.604: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4938 delete e2e-test-crd-publish-openapi-7193-crds test-cr' Jul 27 10:38:23.730: INFO: stderr: "" Jul 27 10:38:23.730: INFO: stdout: "e2e-test-crd-publish-openapi-7193-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR Jul 27 10:38:23.730: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7193-crds' Jul 27 10:38:24.067: INFO: stderr: "" Jul 27 10:38:24.067: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7193-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n 
\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:38:27.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4938" for this suite. • [SLOW TEST:10.453 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":13,"skipped":255,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:38:27.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 27 10:38:27.437: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 27 10:38:29.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 10:38:31.609: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, 
loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 10:38:33.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443107, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 27 10:38:36.566: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:38:36.570: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:38:37.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3448" for this suite. STEP: Destroying namespace "webhook-3448-markers" for this suite. 
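Registering the webhook "via the AdmissionRegistration API", as the step above puts it, means creating a ValidatingWebhookConfiguration that points admission calls for the test CRD at the sample-webhook-deployment service deployed earlier. A sketch of such an object, assuming an illustrative CRD group/resource and webhook path (the suite generates its own names and CA bundle):

apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-custom-resource-webhook      # hypothetical name
webhooks:
- name: deny-crd.example.com               # hypothetical webhook name
  rules:
  - apiGroups: ["stable.example.com"]      # illustrative CRD group
    apiVersions: ["v1"]
    operations: ["CREATE", "UPDATE", "DELETE"]
    resources: ["crontabs"]                # illustrative CRD plural
  clientConfig:
    service:
      name: e2e-test-webhook               # the service whose endpoint pairing is awaited above
      namespace: webhook-3448
      path: /custom-resource               # hypothetical path served by the webhook pod
    caBundle: ""                           # base64 CA bundle for the webhook's serving certificate
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail

With the webhook answering "denied" for requests carrying the offending key, CREATE, UPDATE and DELETE on the custom resource are all rejected, and only after the key is removed does the final delete in the log go through.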
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.875 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":14,"skipped":267,"failed":0} SSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:38:37.904: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:38:38.232: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jul 27 10:38:43.279: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 27 10:38:43.279: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jul 27 10:38:43.333: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-9346 /apis/apps/v1/namespaces/deployment-9346/deployments/test-cleanup-deployment e2a1503e-1f6c-442a-b7e3-c00cf5db7138 4545540 1 2020-07-27 10:38:43 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-07-27 10:38:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 
102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002a22258 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Jul 27 10:38:43.360: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-9346 /apis/apps/v1/namespaces/deployment-9346/replicasets/test-cleanup-deployment-b4867b47f f934e4dc-a26b-4621-9988-b5abdff1f009 4545542 1 2020-07-27 10:38:43 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment e2a1503e-1f6c-442a-b7e3-c00cf5db7138 0xc002aa62e0 0xc002aa62e1}] [] [{kube-controller-manager Update apps/v1 2020-07-27 10:38:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 
111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 50 97 49 53 48 51 101 45 49 102 54 99 45 52 52 50 97 45 98 55 101 51 45 99 48 48 99 102 53 100 98 55 49 51 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC 
map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002aa6358 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 27 10:38:43.360: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jul 27 10:38:43.360: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-9346 /apis/apps/v1/namespaces/deployment-9346/replicasets/test-cleanup-controller d00d2caf-c974-460a-9fba-255d08d7bc71 4545541 1 2020-07-27 10:38:38 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment e2a1503e-1f6c-442a-b7e3-c00cf5db7138 0xc002aa60a7 0xc002aa60a8}] [] [{e2e.test Update apps/v1 2020-07-27 10:38:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-27 10:38:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 
110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 50 97 49 53 48 51 101 45 49 102 54 99 45 52 52 50 97 45 98 55 101 51 45 99 48 48 99 102 53 100 98 55 49 51 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002aa6278 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 27 10:38:43.376: INFO: Pod "test-cleanup-controller-hnn55" is available: &Pod{ObjectMeta:{test-cleanup-controller-hnn55 test-cleanup-controller- deployment-9346 /api/v1/namespaces/deployment-9346/pods/test-cleanup-controller-hnn55 48991bf5-50a9-438b-8f10-f1a881963b74 4545517 0 2020-07-27 10:38:38 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller d00d2caf-c974-460a-9fba-255d08d7bc71 0xc002aa6817 0xc002aa6818}] [] [{kube-controller-manager Update v1 2020-07-27 10:38:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 48 48 100 50 99 97 102 45 99 57 55 52 45 52 54 48 97 45 57 102 98 97 45 50 53 53 100 48 56 100 55 98 99 55 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 
111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 10:38:41 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 48 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wt4m4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wt4m4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wt4m4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 10:38:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 10:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 10:38:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 10:38:38 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.101,StartTime:2020-07-27 10:38:38 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 10:38:41 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bed441fee68ca77904ee734c83d12d43c81c3c352a6f1a95ea8d68e260d96f53,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.101,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Jul 27 10:38:43.376: INFO: Pod "test-cleanup-deployment-b4867b47f-687bm" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-687bm test-cleanup-deployment-b4867b47f- deployment-9346 /api/v1/namespaces/deployment-9346/pods/test-cleanup-deployment-b4867b47f-687bm d102e229-7dc9-43ef-9cd0-14327b60edf5 4545547 0 2020-07-27 10:38:43 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f f934e4dc-a26b-4621-9988-b5abdff1f009 0xc002aa69d0 0xc002aa69d1}] [] [{kube-controller-manager Update v1 2020-07-27 10:38:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 102 57 51 52 101 52 100 99 45 97 50 54 98 45 52 54 50 49 45 57 57 56 56 45 98 53 97 98 100 102 102 49 102 48 48 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wt4m4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wt4m4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wt4m4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 10:38:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:38:43.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9346" for this suite. 
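The object dumps above show why the old ReplicaSet disappears: test-cleanup-deployment is created with RevisionHistoryLimit:*0, so once its own ReplicaSet (test-cleanup-deployment-b4867b47f) takes over, the deployment controller garbage-collects the superseded test-cleanup-controller instead of keeping it for rollback. A trimmed sketch of the deployment, keeping only the fields relevant to the cleanup behaviour (values taken from the dump; everything else is omitted):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
spec:
  revisionHistoryLimit: 0        # delete superseded ReplicaSets instead of retaining them
  replicas: 1
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: agnhost
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12

With the default revisionHistoryLimit of 10 the old ReplicaSet would merely be scaled to zero and kept; setting it to 0 is what lets the "Waiting for deployment test-cleanup-deployment history to be cleaned up" step succeed.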
• [SLOW TEST:5.566 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":15,"skipped":272,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:38:43.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:38:50.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4022" for this suite. • [SLOW TEST:7.517 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":275,"completed":16,"skipped":341,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:38:50.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:38:57.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-3174" for this suite. • [SLOW TEST:6.725 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":17,"skipped":358,"failed":0} SSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:38:57.713: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with configMap that has name projected-configmap-test-upd-ceaf3eb6-c495-47a2-85e8-11ee024cd1d4 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-ceaf3eb6-c495-47a2-85e8-11ee024cd1d4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:40:32.496: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3640" for this suite. 
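The nearly 95-second wait in the projected-configmap test (the SLOW TEST figure just below) is expected: ConfigMap data consumed through a volume is refreshed by the kubelet on its periodic sync rather than instantly, so "waiting to observe update in volume" can take a minute or more after the ConfigMap is updated. A sketch of a pod consuming a ConfigMap through a projected volume, with illustrative names and key (the suite uses the generated projected-configmap-test-upd-... name shown in the log):

apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-update-demo       # hypothetical name
spec:
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-update-demo   # hypothetical ConfigMap name
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/projected-configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume

Updating the ConfigMap's data then changes the file content in place inside the running container once the kubelet resyncs the projected volume, which is the transition the test polls for.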
• [SLOW TEST:94.789 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":18,"skipped":363,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:40:32.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a secret [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a secret STEP: listing secrets in all namespaces to ensure that there are more than zero STEP: patching the secret STEP: deleting the secret using a LabelSelector STEP: listing secrets in all namespaces, searching for label name and value in patch [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:40:32.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5401" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":19,"skipped":370,"failed":0} ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:40:32.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating server pod server in namespace prestop-730 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-730 STEP: Deleting pre-stop pod Jul 27 10:40:51.791: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:40:51.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-730" for this suite. • [SLOW TEST:19.227 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":20,"skipped":370,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:40:51.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:41:08.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2820" for this suite. • [SLOW TEST:17.108 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance]","total":275,"completed":21,"skipped":391,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:41:09.000: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jul 27 10:41:14.125: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:41:14.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8246" for this suite. • [SLOW TEST:5.191 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":22,"skipped":424,"failed":0} SSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:41:14.191: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 27 10:41:14.245: INFO: Waiting up to 5m0s for pod "downwardapi-volume-77a7c420-2bf1-4347-a05f-1d8cbe16da25" in namespace "projected-864" to be "Succeeded or Failed" Jul 27 10:41:14.297: INFO: Pod "downwardapi-volume-77a7c420-2bf1-4347-a05f-1d8cbe16da25": Phase="Pending", Reason="", readiness=false. Elapsed: 52.309964ms Jul 27 10:41:16.300: INFO: Pod "downwardapi-volume-77a7c420-2bf1-4347-a05f-1d8cbe16da25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05573084s Jul 27 10:41:18.303: INFO: Pod "downwardapi-volume-77a7c420-2bf1-4347-a05f-1d8cbe16da25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05854121s STEP: Saw pod success Jul 27 10:41:18.303: INFO: Pod "downwardapi-volume-77a7c420-2bf1-4347-a05f-1d8cbe16da25" satisfied condition "Succeeded or Failed" Jul 27 10:41:18.306: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-77a7c420-2bf1-4347-a05f-1d8cbe16da25 container client-container: STEP: delete the pod Jul 27 10:41:18.435: INFO: Waiting for pod downwardapi-volume-77a7c420-2bf1-4347-a05f-1d8cbe16da25 to disappear Jul 27 10:41:18.452: INFO: Pod downwardapi-volume-77a7c420-2bf1-4347-a05f-1d8cbe16da25 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:41:18.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-864" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":429,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:41:18.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 27 10:41:18.539: INFO: Waiting up to 5m0s for pod "downwardapi-volume-eda5b637-c13d-4b57-bdde-8c09b4629d52" in namespace "projected-6425" to be "Succeeded or Failed" Jul 27 10:41:18.583: INFO: Pod "downwardapi-volume-eda5b637-c13d-4b57-bdde-8c09b4629d52": Phase="Pending", Reason="", readiness=false. Elapsed: 44.284486ms Jul 27 10:41:20.654: INFO: Pod "downwardapi-volume-eda5b637-c13d-4b57-bdde-8c09b4629d52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114902417s Jul 27 10:41:22.658: INFO: Pod "downwardapi-volume-eda5b637-c13d-4b57-bdde-8c09b4629d52": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.118631613s STEP: Saw pod success Jul 27 10:41:22.658: INFO: Pod "downwardapi-volume-eda5b637-c13d-4b57-bdde-8c09b4629d52" satisfied condition "Succeeded or Failed" Jul 27 10:41:22.660: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-eda5b637-c13d-4b57-bdde-8c09b4629d52 container client-container: STEP: delete the pod Jul 27 10:41:22.791: INFO: Waiting for pod downwardapi-volume-eda5b637-c13d-4b57-bdde-8c09b4629d52 to disappear Jul 27 10:41:22.817: INFO: Pod downwardapi-volume-eda5b637-c13d-4b57-bdde-8c09b4629d52 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:41:22.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6425" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":24,"skipped":493,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:41:22.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:41:23.007: INFO: Waiting up to 5m0s for pod "busybox-user-65534-d2d88867-2837-4e22-a37d-e47b659ee95e" in namespace "security-context-test-1192" to be "Succeeded or Failed" Jul 27 10:41:23.020: INFO: Pod "busybox-user-65534-d2d88867-2837-4e22-a37d-e47b659ee95e": Phase="Pending", Reason="", readiness=false. Elapsed: 13.773474ms Jul 27 10:41:25.023: INFO: Pod "busybox-user-65534-d2d88867-2837-4e22-a37d-e47b659ee95e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01665974s Jul 27 10:41:27.027: INFO: Pod "busybox-user-65534-d2d88867-2837-4e22-a37d-e47b659ee95e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019816027s Jul 27 10:41:27.027: INFO: Pod "busybox-user-65534-d2d88867-2837-4e22-a37d-e47b659ee95e" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:41:27.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1192" for this suite. 
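The Security Context spec above exercises pod.spec.containers[].securityContext.runAsUser: the busybox pod must come up running as uid 65534. The following is a minimal client-go sketch of the same request, not the conformance test's own code; it assumes a reachable cluster via the kubeconfig path this run uses, a context-aware client-go (v0.18 or newer), and an illustrative pod name and image.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	uid := int64(65534) // the "nobody" uid the conformance spec checks for
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-user-65534-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:            "main",
				Image:           "busybox",
				Command:         []string{"sh", "-c", "id -u"}, // should print 65534
				SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
			}},
		},
	}

	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name, "running as uid", uid)
}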
•{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":25,"skipped":494,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:41:27.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:41:27.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1623' Jul 27 10:41:27.469: INFO: stderr: "" Jul 27 10:41:27.469: INFO: stdout: "replicationcontroller/agnhost-master created\n" Jul 27 10:41:27.469: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1623' Jul 27 10:41:27.754: INFO: stderr: "" Jul 27 10:41:27.754: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jul 27 10:41:28.758: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:41:28.758: INFO: Found 0 / 1 Jul 27 10:41:29.758: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:41:29.758: INFO: Found 0 / 1 Jul 27 10:41:30.759: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:41:30.759: INFO: Found 1 / 1 Jul 27 10:41:30.759: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 27 10:41:30.762: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:41:30.762: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 27 10:41:30.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe pod agnhost-master-hw6b4 --namespace=kubectl-1623' Jul 27 10:41:30.870: INFO: stderr: "" Jul 27 10:41:30.870: INFO: stdout: "Name: agnhost-master-hw6b4\nNamespace: kubectl-1623\nPriority: 0\nNode: kali-worker2/172.18.0.15\nStart Time: Mon, 27 Jul 2020 10:41:27 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.201\nIPs:\n IP: 10.244.1.201\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://07b23eb6a98e94f707598268bad78dba00e2ae4542f261e0659b312c9ea11342\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 27 Jul 2020 10:41:30 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-mgmhx (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-mgmhx:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-mgmhx\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-1623/agnhost-master-hw6b4 to kali-worker2\n Normal Pulled 2s kubelet, kali-worker2 Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 0s kubelet, kali-worker2 Created container agnhost-master\n Normal Started 0s kubelet, kali-worker2 Started container agnhost-master\n" Jul 27 10:41:30.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-1623' Jul 27 10:41:30.977: INFO: stderr: "" Jul 27 10:41:30.977: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1623\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-hw6b4\n" Jul 27 10:41:30.977: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-1623' Jul 27 10:41:31.072: INFO: stderr: "" Jul 27 10:41:31.072: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-1623\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.97.171.192\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.201:6379\nSession Affinity: None\nEvents: \n" Jul 27 10:41:31.075: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe node 
kali-control-plane' Jul 27 10:41:31.184: INFO: stderr: "" Jul 27 10:41:31.184: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:27:46 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Mon, 27 Jul 2020 10:41:24 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Mon, 27 Jul 2020 10:37:35 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 27 Jul 2020 10:37:35 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 27 Jul 2020 10:37:35 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 27 Jul 2020 10:37:35 +0000 Fri, 10 Jul 2020 10:28:23 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.16\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: d83d42c4b42d4de1b3233683d9cadf95\n System UUID: e06c57c7-ce4f-4ae9-8bb6-40f1dc0e1a64\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-34-g49b0743c\n Kubelet Version: v1.18.4\n Kube-Proxy Version: v1.18.4\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-qtcqs 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 17d\n kube-system coredns-66bff467f8-tjkg9 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 17d\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kindnet-zxw2f 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 17d\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kube-proxy-xmqbs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17d\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 17d\n local-path-storage local-path-provisioner-67795f75bd-clsb6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 17d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Jul 27 10:41:31.184: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 
--kubeconfig=/root/.kube/config describe namespace kubectl-1623' Jul 27 10:41:31.289: INFO: stderr: "" Jul 27 10:41:31.289: INFO: stdout: "Name: kubectl-1623\nLabels: e2e-framework=kubectl\n e2e-run=f4269bb3-2b14-484b-968a-e6796a7b9759\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:41:31.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1623" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":26,"skipped":501,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:41:31.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-2edd90b0-0d60-4990-bba0-9bdc2ecaa0d5 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:41:37.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8824" for this suite. 
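The ConfigMap spec above relies on a v1 ConfigMap carrying both data (UTF-8 strings) and binaryData (arbitrary bytes), with the kubelet projecting both kinds of key into the mounted volume. A minimal client-go sketch of creating such an object, assuming the same kubeconfig and a recent (context-aware) client-go; the object name and payload bytes are illustrative.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-binary-sketch"},
		// Text keys go in Data; arbitrary (possibly non-UTF-8) bytes go in BinaryData.
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xff, 0xfe, 0xfd, 0x42}},
	}

	created, err := cs.CoreV1().ConfigMaps("default").Create(context.TODO(), cm, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("created %s with %d text and %d binary keys\n",
		created.Name, len(created.Data), len(created.BinaryData))
}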
• [SLOW TEST:6.107 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":519,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:41:37.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap that has name configmap-test-emptyKey-c5459de7-93c4-4a53-a1e9-96169ed4109a [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:41:37.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5876" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":28,"skipped":530,"failed":0} ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:41:37.637: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:41:57.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9582" for this suite. 
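The Job spec above depends on restartPolicy: OnFailure, where the kubelet restarts a failed container in place and the Job still counts the pod toward completions once it finally exits 0. Below is a sketch of an equivalent Job through client-go; the emptyDir marker trick (fail on the first attempt, succeed after the local restart) is an illustrative stand-in for whatever failure injection the test image actually uses, and all names are made up.

package main

import (
	"context"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	completions, parallelism := int32(4), int32(2)
	job := &batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "locally-restarted-sketch"},
		Spec: batchv1.JobSpec{
			Completions: &completions,
			Parallelism: &parallelism,
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					// OnFailure: the kubelet restarts the container inside the same pod
					// instead of the Job controller replacing the whole pod.
					RestartPolicy: corev1.RestartPolicyOnFailure,
					Volumes: []corev1.Volume{{
						Name:         "data",
						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
					}},
					Containers: []corev1.Container{{
						Name:  "worker",
						Image: "busybox",
						// First run: no marker file yet, so fail; after the in-place
						// restart the marker exists and the container exits 0.
						Command: []string{"sh", "-c",
							"if [ -f /data/done ]; then exit 0; else touch /data/done; exit 1; fi"},
						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
					}},
				},
			},
		},
	}

	created, err := cs.BatchV1().Jobs("default").Create(context.TODO(), job, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created job", created.Name)
}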
• [SLOW TEST:20.115 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":29,"skipped":530,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:41:57.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 27 10:41:57.928: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b19b7c23-d733-4d94-b9b5-201105c0611e" in namespace "downward-api-983" to be "Succeeded or Failed" Jul 27 10:41:57.975: INFO: Pod "downwardapi-volume-b19b7c23-d733-4d94-b9b5-201105c0611e": Phase="Pending", Reason="", readiness=false. Elapsed: 46.430005ms Jul 27 10:42:00.041: INFO: Pod "downwardapi-volume-b19b7c23-d733-4d94-b9b5-201105c0611e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.112233447s Jul 27 10:42:02.045: INFO: Pod "downwardapi-volume-b19b7c23-d733-4d94-b9b5-201105c0611e": Phase="Running", Reason="", readiness=true. Elapsed: 4.116331475s Jul 27 10:42:04.049: INFO: Pod "downwardapi-volume-b19b7c23-d733-4d94-b9b5-201105c0611e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.120631139s STEP: Saw pod success Jul 27 10:42:04.049: INFO: Pod "downwardapi-volume-b19b7c23-d733-4d94-b9b5-201105c0611e" satisfied condition "Succeeded or Failed" Jul 27 10:42:04.052: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-b19b7c23-d733-4d94-b9b5-201105c0611e container client-container: STEP: delete the pod Jul 27 10:42:04.204: INFO: Waiting for pod downwardapi-volume-b19b7c23-d733-4d94-b9b5-201105c0611e to disappear Jul 27 10:42:04.244: INFO: Pod downwardapi-volume-b19b7c23-d733-4d94-b9b5-201105c0611e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:42:04.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-983" for this suite. 
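The Downward API spec above hinges on a downwardAPI volume file backed by resourceFieldRef limits.cpu while the container declares no CPU limit, so the projected value falls back to the node's allocatable CPU. A manifest-only sketch of such a pod, built with the Go API types and printed as JSON so it could be fed to kubectl apply -f -; names, image, and the millicore divisor are illustrative choices, not the test's fixtures.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-cpu-limit-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "client-container",
				Image: "busybox",
				// No resources.limits.cpu is set, so the projected file should hold
				// the node's allocatable CPU rather than a container limit.
				Command:      []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "cpu_limit",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "limits.cpu",
								Divisor:       resource.MustParse("1m"), // report in millicores
							},
						}},
					},
				},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // pipe into: kubectl apply -f -
}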
• [SLOW TEST:6.500 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":30,"skipped":533,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:42:04.253: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Jul 27 10:42:09.053: INFO: Successfully updated pod "annotationupdatedccc525d-a14e-4f8b-8b25-253edbbeccf0" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:42:11.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4209" for this suite. 
• [SLOW TEST:6.836 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":31,"skipped":562,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:42:11.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 27 10:42:11.187: INFO: Waiting up to 5m0s for pod "pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc" in namespace "emptydir-9152" to be "Succeeded or Failed" Jul 27 10:42:11.196: INFO: Pod "pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.741661ms Jul 27 10:42:13.227: INFO: Pod "pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039877404s Jul 27 10:42:15.674: INFO: Pod "pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.487352411s Jul 27 10:42:17.678: INFO: Pod "pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc": Phase="Running", Reason="", readiness=true. Elapsed: 6.491097272s Jul 27 10:42:19.832: INFO: Pod "pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.645187954s STEP: Saw pod success Jul 27 10:42:19.832: INFO: Pod "pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc" satisfied condition "Succeeded or Failed" Jul 27 10:42:19.835: INFO: Trying to get logs from node kali-worker pod pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc container test-container: STEP: delete the pod Jul 27 10:42:20.000: INFO: Waiting for pod pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc to disappear Jul 27 10:42:20.004: INFO: Pod pod-8be45c16-c6d3-472f-9c2c-eef95cda95dc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:42:20.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9152" for this suite. 
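The EmptyDir spec above is one cell of a (user, mode, medium) matrix: an emptyDir on the default, disk-backed medium must accept a root-owned file created with mode 0666. The conformance test drives this through its mount-test image; the sketch below hand-rolls a rough busybox equivalent and prints the manifest as JSON, with all names illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cache",
				// Empty EmptyDirVolumeSource means the default (node disk) medium;
				// corev1.StorageMediumMemory would switch it to tmpfs.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file as root with mode 0666 and show its permissions.
				Command: []string{"sh", "-c",
					"touch /cache/f && chmod 0666 /cache/f && ls -l /cache/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
		},
	}

	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out)) // pipe into: kubectl apply -f -
}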
• [SLOW TEST:8.921 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":32,"skipped":572,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:42:20.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 27 10:42:20.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 27 10:42:24.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443340, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443340, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443340, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443340, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 10:42:26.222: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443340, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443340, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443340, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443340, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 27 10:42:30.154: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:42:30.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9562" for this suite. STEP: Destroying namespace "webhook-9562-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:10.549 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":33,"skipped":576,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:42:30.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:42:30.657: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:42:38.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"custom-resource-definition-3631" for this suite. • [SLOW TEST:7.760 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":275,"completed":34,"skipped":589,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:42:38.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: starting the proxy server Jul 27 10:42:38.363: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:42:38.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3338" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":275,"completed":35,"skipped":605,"failed":0} SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:42:38.437: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Jul 27 10:42:38.476: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 27 10:42:38.523: INFO: Waiting for terminating namespaces to be deleted... 
Jul 27 10:42:38.526: INFO: Logging pods the kubelet thinks is on node kali-worker before test Jul 27 10:42:38.530: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Jul 27 10:42:38.530: INFO: Container kindnet-cni ready: true, restart count 1 Jul 27 10:42:38.530: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Jul 27 10:42:38.530: INFO: Container kube-proxy ready: true, restart count 0 Jul 27 10:42:38.530: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Jul 27 10:42:38.535: INFO: rally-e16cb124-n7jrij5e from c-rally-e16cb124-7kaqlbw8 started at 2020-07-27 10:42:24 +0000 UTC (1 container statuses recorded) Jul 27 10:42:38.536: INFO: Container rally-e16cb124-n7jrij5e ready: true, restart count 0 Jul 27 10:42:38.536: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 27 10:42:38.536: INFO: Container kindnet-cni ready: true, restart count 1 Jul 27 10:42:38.536: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 27 10:42:38.536: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.162595fd66da3a73], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:42:39.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5452" for this suite. 
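The scheduling spec above only needs a pod whose nodeSelector matches no node label: the scheduler leaves it Pending and emits the FailedScheduling event quoted in the log ("3 node(s) didn't match node selector"). A client-go sketch that reproduces the condition, assuming the same kubeconfig and a recent client-go; the selector key, pod name, and pause image are illustrative, and the event may take a moment to appear after creation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod-sketch"},
		Spec: corev1.PodSpec{
			// No node carries this label, so the pod should stay Pending
			// with a FailedScheduling event.
			NodeSelector: map[string]string{"example.com/no-such-label": "42"},
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}

	created, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}

	// Inspect the scheduler's verdict via the pod's events.
	events, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=" + created.Name,
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Println(e.Reason, e.Message)
	}
}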
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":275,"completed":36,"skipped":616,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:42:39.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 27 10:42:40.212: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created Jul 27 10:42:42.545: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 10:42:44.948: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 10:42:46.621: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, 
ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443360, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 27 10:42:49.595: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:42:49.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1476-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:42:50.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2144" for this suite. STEP: Destroying namespace "webhook-2144-markers" for this suite. 
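The spec above registers a mutating webhook for a custom resource group via the AdmissionRegistration API once the webhook deployment and service are serving. Below is a hedged client-go sketch of such a registration, not the test's generated fixture: it assumes a recent client-go, an already-deployed TLS webhook service (here called sample-webhook in the default namespace), and a placeholder CA bundle; every name, path, and API group is illustrative.

package main

import (
	"context"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	path := "/mutate-crd"
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	caPEM := []byte("-----BEGIN CERTIFICATE-----\nPLACEHOLDER\n-----END CERTIFICATE-----\n")

	mwc := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-custom-resource-sketch"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-crd.example.com",
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "default",
					Name:      "sample-webhook",
					Path:      &path,
				},
				CABundle: caPEM,
			},
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"examples"},
				},
			}},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}

	created, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().
		Create(context.TODO(), mwc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("registered", created.Name)
}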
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:11.368 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":37,"skipped":629,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:42:50.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:42:51.054: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:42:52.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9071" for this suite. 
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":275,"completed":38,"skipped":630,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:42:53.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Jul 27 10:43:13.289: INFO: 5 pods remaining Jul 27 10:43:13.289: INFO: 5 pods has nil DeletionTimestamp Jul 27 10:43:13.289: INFO: STEP: Gathering metrics W0727 10:43:17.507114 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 27 10:43:17.507: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:43:17.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7029" for this suite. 
• [SLOW TEST:24.117 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":39,"skipped":636,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:43:17.514: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0727 10:43:18.693666 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 27 10:43:18.693: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:43:18.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2598" for this suite. 
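The spec above is driven by deleteOptions.propagationPolicy: deleting the deployment with Orphan tells the garbage collector to strip the owner reference from the ReplicaSet instead of deleting it. A client-go sketch of that delete call, assuming a recent client-go and an existing Deployment named example-deployment in the default namespace (name illustrative).

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Orphan: the Deployment object goes away, but its ReplicaSet (and pods)
	// are left behind with their owner references removed by the GC.
	policy := metav1.DeletePropagationOrphan
	err = cs.AppsV1().Deployments("default").Delete(context.TODO(), "example-deployment",
		metav1.DeleteOptions{PropagationPolicy: &policy})
	if err != nil {
		panic(err)
	}
	fmt.Println("deleted deployment with Orphan propagation")
}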
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":40,"skipped":726,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:43:18.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 27 10:43:18.873: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9" in namespace "projected-2085" to be "Succeeded or Failed" Jul 27 10:43:18.918: INFO: Pod "downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9": Phase="Pending", Reason="", readiness=false. Elapsed: 44.654657ms Jul 27 10:43:20.922: INFO: Pod "downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048297017s Jul 27 10:43:23.181: INFO: Pod "downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.307887161s Jul 27 10:43:25.215: INFO: Pod "downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.341476103s Jul 27 10:43:27.276: INFO: Pod "downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9": Phase="Running", Reason="", readiness=true. Elapsed: 8.402216901s Jul 27 10:43:29.292: INFO: Pod "downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.418861968s STEP: Saw pod success Jul 27 10:43:29.292: INFO: Pod "downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9" satisfied condition "Succeeded or Failed" Jul 27 10:43:29.301: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9 container client-container: STEP: delete the pod Jul 27 10:43:29.731: INFO: Waiting for pod downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9 to disappear Jul 27 10:43:29.738: INFO: Pod downwardapi-volume-c710ad3e-3415-4eca-8e93-574ff68e16c9 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:43:29.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2085" for this suite. 
• [SLOW TEST:11.050 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":842,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:43:29.751: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jul 27 10:43:37.247: INFO: 6 pods remaining Jul 27 10:43:37.247: INFO: 0 pods has nil DeletionTimestamp Jul 27 10:43:37.247: INFO: Jul 27 10:43:37.883: INFO: 0 pods remaining Jul 27 10:43:37.883: INFO: 0 pods has nil DeletionTimestamp Jul 27 10:43:37.883: INFO: Jul 27 10:43:39.191: INFO: 0 pods remaining Jul 27 10:43:39.191: INFO: 0 pods has nil DeletionTimestamp Jul 27 10:43:39.191: INFO: STEP: Gathering metrics W0727 10:43:39.666625 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jul 27 10:43:39.666: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:43:39.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6288" for this suite. 
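Note: while a foreground deletion like the one above is in flight, the RC is kept around by the foregroundDeletion finalizer until every dependent pod is gone. One way to watch that directly (RC name and namespace are illustrative; --cascade=foreground assumes kubectl 1.20+):

kubectl delete rc demo-rc -n gc-demo --cascade=foreground --wait=false
# shows a deletionTimestamp plus the foregroundDeletion finalizer until the last pod disappears
kubectl get rc demo-rc -n gc-demo -o jsonpath='{.metadata.deletionTimestamp} {.metadata.finalizers}{"\n"}'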
• [SLOW TEST:10.900 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":42,"skipped":848,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:43:40.652: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:44:41.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1812" for this suite. 
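Note: the probe test above rests on the fact that a failing readiness probe only marks the container NotReady; unlike a liveness probe it never triggers a restart. A minimal pod to observe both fields staying put (name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: never-ready                 # illustrative
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["false"]
      periodSeconds: 5
EOF
# stays "false 0" for as long as the pod runs
kubectl get pod never-ready -o jsonpath='{.status.containerStatuses[0].ready} {.status.containerStatuses[0].restartCount}{"\n"}'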
• [SLOW TEST:61.087 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":43,"skipped":850,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:44:41.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Jul 27 10:44:42.534: INFO: namespace kubectl-8946 Jul 27 10:44:42.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8946' Jul 27 10:44:44.941: INFO: stderr: "" Jul 27 10:44:44.941: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Jul 27 10:44:45.945: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:45.945: INFO: Found 0 / 1 Jul 27 10:44:46.994: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:46.994: INFO: Found 0 / 1 Jul 27 10:44:47.958: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:47.958: INFO: Found 0 / 1 Jul 27 10:44:49.131: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:49.131: INFO: Found 0 / 1 Jul 27 10:44:50.042: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:50.042: INFO: Found 0 / 1 Jul 27 10:44:51.221: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:51.221: INFO: Found 0 / 1 Jul 27 10:44:51.944: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:51.944: INFO: Found 0 / 1 Jul 27 10:44:52.951: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:52.951: INFO: Found 0 / 1 Jul 27 10:44:53.944: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:53.944: INFO: Found 1 / 1 Jul 27 10:44:53.944: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jul 27 10:44:53.947: INFO: Selector matched 1 pods for map[app:agnhost] Jul 27 10:44:53.948: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Jul 27 10:44:53.948: INFO: wait on agnhost-master startup in kubectl-8946 Jul 27 10:44:53.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs agnhost-master-xj7j6 agnhost-master --namespace=kubectl-8946' Jul 27 10:44:54.065: INFO: stderr: "" Jul 27 10:44:54.065: INFO: stdout: "Paused\n" STEP: exposing RC Jul 27 10:44:54.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8946' Jul 27 10:44:54.193: INFO: stderr: "" Jul 27 10:44:54.193: INFO: stdout: "service/rm2 exposed\n" Jul 27 10:44:54.197: INFO: Service rm2 in namespace kubectl-8946 found. STEP: exposing service Jul 27 10:44:56.202: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8946' Jul 27 10:44:56.319: INFO: stderr: "" Jul 27 10:44:56.319: INFO: stdout: "service/rm3 exposed\n" Jul 27 10:44:56.324: INFO: Service rm3 in namespace kubectl-8946 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:44:58.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8946" for this suite. • [SLOW TEST:16.595 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":275,"completed":44,"skipped":853,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:44:58.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jul 27 10:44:58.468: INFO: Pod name pod-release: Found 0 pods out of 1 Jul 27 10:45:03.470: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:45:04.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5125" for this suite. 
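Note: the ReplicationController test above verifies that relabelling a pod out of the RC's selector "releases" it: the controller drops its ownerReference and creates a replacement. A sketch of the same flow, assuming an RC whose selector is app=demo (all names here are illustrative):

kubectl label pod demo-rc-abcde app=released --overwrite
# the released pod no longer has a controller owner
kubectl get pod demo-rc-abcde -o jsonpath='{.metadata.ownerReferences}{"\n"}'
# meanwhile the RC has already created a replacement to restore its replica count
kubectl get pods -l app=demo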
• [SLOW TEST:6.205 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":45,"skipped":888,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:45:04.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:45:05.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jul 27 10:45:05.809: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-27T10:45:05Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-27T10:45:05Z]] name:name1 resourceVersion:4548190 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:93f77d3d-6b1d-4ef8-b6b4-529dec5590cb] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jul 27 10:45:15.815: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-27T10:45:15Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-27T10:45:15Z]] name:name2 resourceVersion:4548264 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f3f4bdbb-dd11-4ddd-ae3f-a165d115686c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jul 27 10:45:26.120: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-27T10:45:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-27T10:45:25Z]] name:name1 resourceVersion:4548329 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:93f77d3d-6b1d-4ef8-b6b4-529dec5590cb] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jul 27 10:45:36.128: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu 
metadata:map[creationTimestamp:2020-07-27T10:45:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-27T10:45:36Z]] name:name2 resourceVersion:4548393 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f3f4bdbb-dd11-4ddd-ae3f-a165d115686c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jul 27 10:45:46.135: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-27T10:45:05Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-27T10:45:25Z]] name:name1 resourceVersion:4548447 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:93f77d3d-6b1d-4ef8-b6b4-529dec5590cb] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jul 27 10:45:56.274: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-07-27T10:45:15Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-07-27T10:45:36Z]] name:name2 resourceVersion:4548520 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:f3f4bdbb-dd11-4ddd-ae3f-a165d115686c] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:46:06.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-7751" for this suite. 
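Note: the CRD watch test above drives ADDED/MODIFIED/DELETED events for the noxus custom resources. The same event stream can be observed against the watch endpoint that the selfLinks in the log point at (the group/version and plural come from the log; the proxy port is kubectl's default):

kubectl proxy --port=8001 &
curl -N 'http://127.0.0.1:8001/apis/mygroup.example.com/v1beta1/noxus?watch=true'
# or, at a higher level:
kubectl get noxus.mygroup.example.com --watch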
• [SLOW TEST:62.270 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":46,"skipped":890,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:46:06.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:46:07.313: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-c3e6806c-ee96-434b-a880-c525d3db4a20" in namespace "security-context-test-1640" to be "Succeeded or Failed" Jul 27 10:46:07.342: INFO: Pod "alpine-nnp-false-c3e6806c-ee96-434b-a880-c525d3db4a20": Phase="Pending", Reason="", readiness=false. Elapsed: 28.504ms Jul 27 10:46:09.401: INFO: Pod "alpine-nnp-false-c3e6806c-ee96-434b-a880-c525d3db4a20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087664689s Jul 27 10:46:11.909: INFO: Pod "alpine-nnp-false-c3e6806c-ee96-434b-a880-c525d3db4a20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.595475915s Jul 27 10:46:13.913: INFO: Pod "alpine-nnp-false-c3e6806c-ee96-434b-a880-c525d3db4a20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.599680832s Jul 27 10:46:13.913: INFO: Pod "alpine-nnp-false-c3e6806c-ee96-434b-a880-c525d3db4a20" satisfied condition "Succeeded or Failed" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:46:13.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1640" for this suite. 
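Note: the security-context test above asserts that allowPrivilegeEscalation: false sets the kernel's no_new_privs bit on the container process. A small pod to see the same thing (names and image are illustrative; the NoNewPrivs line in /proc requires a reasonably recent Linux kernel, roughly 4.10+):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false-demo              # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: check
    image: busybox
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false
EOF
kubectl logs nnp-false-demo         # expected output: NoNewPrivs: 1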
• [SLOW TEST:7.117 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":47,"skipped":907,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:46:13.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-upd-ad50f1fa-65f1-4a01-b71e-40abebdfa892 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-ad50f1fa-65f1-4a01-b71e-40abebdfa892 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:46:20.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-675" for this suite. 
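Note: the ConfigMap test above relies on the kubelet periodically resyncing configMap-backed volumes, so an update becomes visible inside a running pod without a restart (usually within a minute or so). A hand-run sketch, assuming a pod named cm-demo that mounts the ConfigMap at /etc/config (all names are illustrative; --dry-run=client needs kubectl 1.18+):

kubectl create configmap demo-config --from-literal=key=value-1
# ... start a pod "cm-demo" that mounts configMap demo-config at /etc/config ...
kubectl exec cm-demo -- cat /etc/config/key      # value-1
kubectl create configmap demo-config --from-literal=key=value-2 \
  --dry-run=client -o yaml | kubectl replace -f -
sleep 90                                         # allow for the kubelet's sync period
kubectl exec cm-demo -- cat /etc/config/key      # value-2, same pod, no restart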
• [SLOW TEST:6.191 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":914,"failed":0} S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:46:20.119: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod liveness-a1404f33-7db5-4566-8990-bbbf249050f0 in namespace container-probe-431 Jul 27 10:46:24.248: INFO: Started pod liveness-a1404f33-7db5-4566-8990-bbbf249050f0 in namespace container-probe-431 STEP: checking the pod's current state and verifying that restartCount is present Jul 27 10:46:24.250: INFO: Initial restart count of pod liveness-a1404f33-7db5-4566-8990-bbbf249050f0 is 0 Jul 27 10:46:38.310: INFO: Restart count of pod container-probe-431/liveness-a1404f33-7db5-4566-8990-bbbf249050f0 is now 1 (14.0593107s elapsed) Jul 27 10:46:58.374: INFO: Restart count of pod container-probe-431/liveness-a1404f33-7db5-4566-8990-bbbf249050f0 is now 2 (34.123987588s elapsed) Jul 27 10:47:18.892: INFO: Restart count of pod container-probe-431/liveness-a1404f33-7db5-4566-8990-bbbf249050f0 is now 3 (54.641427553s elapsed) Jul 27 10:47:39.131: INFO: Restart count of pod container-probe-431/liveness-a1404f33-7db5-4566-8990-bbbf249050f0 is now 4 (1m14.880051817s elapsed) Jul 27 10:48:41.425: INFO: Restart count of pod container-probe-431/liveness-a1404f33-7db5-4566-8990-bbbf249050f0 is now 5 (2m17.174885997s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:48:41.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-431" for this suite. 
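Note: the liveness test above counts restarts of a container whose liveness probe keeps failing and checks that the count only ever goes up. A compact way to reproduce the pattern and watch the counter climb (the manifest follows the common "touch, then remove /tmp/healthy" example rather than the framework's own pod; names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo               # illustrative
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# poll the restart count; it should only ever increase
while true; do
  kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'
  sleep 15
done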
• [SLOW TEST:141.371 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":915,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:48:41.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7433, will wait for the garbage collector to delete the pods Jul 27 10:48:47.603: INFO: Deleting Job.batch foo took: 7.692822ms Jul 27 10:48:48.004: INFO: Terminating Job.batch foo pods took: 400.330532ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:49:33.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7433" for this suite. • [SLOW TEST:52.050 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":50,"skipped":924,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:49:33.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:49:37.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1379" for this suite. 
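Note: the containers test above checks that leaving command and args unset makes the container run the image's own ENTRYPOINT/CMD. A quick CLI check (pod name and image are illustrative):

kubectl run image-defaults --image=nginx --restart=Never
# both fields come back empty, so the image defaults are what actually runs
kubectl get pod image-defaults -o jsonpath='{.spec.containers[0].command} {.spec.containers[0].args}{"\n"}'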
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":940,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:49:37.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 27 10:49:38.338: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 27 10:49:40.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443778, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443778, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443778, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731443778, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 27 10:49:44.056: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:49:56.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9247" 
for this suite. STEP: Destroying namespace "webhook-9247-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.101 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":52,"skipped":989,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:49:56.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Jul 27 10:49:56.902: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 27 10:49:56.926: INFO: Waiting for terminating namespaces to be deleted... 
Jul 27 10:49:56.929: INFO: Logging pods the kubelet thinks is on node kali-worker before test Jul 27 10:49:56.935: INFO: rally-4c52f974-tk0kzax4-fxpwg from c-rally-4c52f974-2fn6was7 started at 2020-07-27 10:49:49 +0000 UTC (1 container statuses recorded) Jul 27 10:49:56.935: INFO: Container rally-4c52f974-tk0kzax4 ready: false, restart count 0 Jul 27 10:49:56.935: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Jul 27 10:49:56.935: INFO: Container kindnet-cni ready: true, restart count 1 Jul 27 10:49:56.935: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Jul 27 10:49:56.935: INFO: Container kube-proxy ready: true, restart count 0 Jul 27 10:49:56.935: INFO: rally-4c52f974-tk0kzax4 from c-rally-4c52f974-2fn6was7 started at 2020-07-27 10:49:45 +0000 UTC (1 container statuses recorded) Jul 27 10:49:56.935: INFO: Container rally-4c52f974-tk0kzax4 ready: true, restart count 0 Jul 27 10:49:56.935: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Jul 27 10:49:56.954: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 27 10:49:56.954: INFO: Container kube-proxy ready: true, restart count 0 Jul 27 10:49:56.954: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 27 10:49:56.954: INFO: Container kindnet-cni ready: true, restart count 1 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-508fd620-0191-47ea-b9fa-7c69fbe5fdb1 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-508fd620-0191-47ea-b9fa-7c69fbe5fdb1 off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-508fd620-0191-47ea-b9fa-7c69fbe5fdb1 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:55:05.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1064" for this suite. 
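Note: the scheduling test above places two pods on the same node with the same hostPort and protocol; because 0.0.0.0 is the wildcard host IP, it conflicts with 127.0.0.1 and the second pod stays Pending. The relevant part of the spec looks roughly like this (pod name and image are illustrative; the node name and port 54322 come from the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostport-a                  # illustrative
spec:
  nodeSelector:
    kubernetes.io/hostname: kali-worker2
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.2
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 0.0.0.0
      protocol: TCP
EOF
# a second pod, identical apart from its name and hostIP: 127.0.0.1,
# is refused by the node's port predicate and stays Pending
kubectl get pods -o wide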
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:308.494 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":53,"skipped":1025,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:55:05.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on node default medium Jul 27 10:55:05.330: INFO: Waiting up to 5m0s for pod "pod-071d1e6d-9103-43b3-8466-33abe5cbbefa" in namespace "emptydir-5063" to be "Succeeded or Failed" Jul 27 10:55:05.400: INFO: Pod "pod-071d1e6d-9103-43b3-8466-33abe5cbbefa": Phase="Pending", Reason="", readiness=false. Elapsed: 69.060401ms Jul 27 10:55:07.404: INFO: Pod "pod-071d1e6d-9103-43b3-8466-33abe5cbbefa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073496251s Jul 27 10:55:09.408: INFO: Pod "pod-071d1e6d-9103-43b3-8466-33abe5cbbefa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077031215s STEP: Saw pod success Jul 27 10:55:09.408: INFO: Pod "pod-071d1e6d-9103-43b3-8466-33abe5cbbefa" satisfied condition "Succeeded or Failed" Jul 27 10:55:09.410: INFO: Trying to get logs from node kali-worker pod pod-071d1e6d-9103-43b3-8466-33abe5cbbefa container test-container: STEP: delete the pod Jul 27 10:55:09.496: INFO: Waiting for pod pod-071d1e6d-9103-43b3-8466-33abe5cbbefa to disappear Jul 27 10:55:09.505: INFO: Pod pod-071d1e6d-9103-43b3-8466-33abe5cbbefa no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:55:09.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5063" for this suite. 
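Note: the emptyDir test above writes a 0666-mode file on the default medium as a non-root user. A small pod in the same spirit (uid, paths and names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo               # illustrative
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo hello > /mnt/data/file && chmod 0666 /mnt/data/file && ls -ln /mnt/data"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/data
  volumes:
  - name: scratch
    emptyDir: {}                    # default medium, i.e. node-local disk
EOF
kubectl logs emptydir-demo          # shows -rw-rw-rw- and uid 1000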
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":54,"skipped":1061,"failed":0} ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:55:09.512: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:55:17.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8553" for this suite. • [SLOW TEST:8.160 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:79 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":55,"skipped":1061,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:55:17.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a replication controller Jul 27 10:55:17.719: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3621' Jul 27 10:55:21.175: INFO: stderr: "" Jul 27 10:55:21.175: INFO: stdout: 
"replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 27 10:55:21.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3621' Jul 27 10:55:21.296: INFO: stderr: "" Jul 27 10:55:21.296: INFO: stdout: "update-demo-nautilus-fkbt8 update-demo-nautilus-m9nkj " Jul 27 10:55:21.296: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fkbt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:21.500: INFO: stderr: "" Jul 27 10:55:21.500: INFO: stdout: "" Jul 27 10:55:21.500: INFO: update-demo-nautilus-fkbt8 is created but not running Jul 27 10:55:26.501: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3621' Jul 27 10:55:26.620: INFO: stderr: "" Jul 27 10:55:26.620: INFO: stdout: "update-demo-nautilus-fkbt8 update-demo-nautilus-m9nkj " Jul 27 10:55:26.620: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fkbt8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:26.718: INFO: stderr: "" Jul 27 10:55:26.718: INFO: stdout: "true" Jul 27 10:55:26.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-fkbt8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:26.832: INFO: stderr: "" Jul 27 10:55:26.832: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 27 10:55:26.832: INFO: validating pod update-demo-nautilus-fkbt8 Jul 27 10:55:26.836: INFO: got data: { "image": "nautilus.jpg" } Jul 27 10:55:26.836: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 27 10:55:26.836: INFO: update-demo-nautilus-fkbt8 is verified up and running Jul 27 10:55:26.837: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9nkj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:26.939: INFO: stderr: "" Jul 27 10:55:26.939: INFO: stdout: "true" Jul 27 10:55:26.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9nkj -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:27.041: INFO: stderr: "" Jul 27 10:55:27.041: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 27 10:55:27.041: INFO: validating pod update-demo-nautilus-m9nkj Jul 27 10:55:27.045: INFO: got data: { "image": "nautilus.jpg" } Jul 27 10:55:27.045: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 27 10:55:27.045: INFO: update-demo-nautilus-m9nkj is verified up and running STEP: scaling down the replication controller Jul 27 10:55:27.048: INFO: scanned /root for discovery docs: Jul 27 10:55:27.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3621' Jul 27 10:55:28.174: INFO: stderr: "" Jul 27 10:55:28.174: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 27 10:55:28.175: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3621' Jul 27 10:55:28.367: INFO: stderr: "" Jul 27 10:55:28.367: INFO: stdout: "update-demo-nautilus-fkbt8 update-demo-nautilus-m9nkj " STEP: Replicas for name=update-demo: expected=1 actual=2 Jul 27 10:55:33.367: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3621' Jul 27 10:55:33.455: INFO: stderr: "" Jul 27 10:55:33.455: INFO: stdout: "update-demo-nautilus-m9nkj " Jul 27 10:55:33.455: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9nkj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:33.547: INFO: stderr: "" Jul 27 10:55:33.547: INFO: stdout: "true" Jul 27 10:55:33.547: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9nkj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:33.647: INFO: stderr: "" Jul 27 10:55:33.647: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 27 10:55:33.647: INFO: validating pod update-demo-nautilus-m9nkj Jul 27 10:55:33.649: INFO: got data: { "image": "nautilus.jpg" } Jul 27 10:55:33.649: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jul 27 10:55:33.649: INFO: update-demo-nautilus-m9nkj is verified up and running STEP: scaling up the replication controller Jul 27 10:55:33.651: INFO: scanned /root for discovery docs: Jul 27 10:55:33.651: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3621' Jul 27 10:55:34.849: INFO: stderr: "" Jul 27 10:55:34.849: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Jul 27 10:55:34.850: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3621' Jul 27 10:55:34.953: INFO: stderr: "" Jul 27 10:55:34.953: INFO: stdout: "update-demo-nautilus-4czbh update-demo-nautilus-m9nkj " Jul 27 10:55:34.953: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4czbh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:35.040: INFO: stderr: "" Jul 27 10:55:35.040: INFO: stdout: "" Jul 27 10:55:35.040: INFO: update-demo-nautilus-4czbh is created but not running Jul 27 10:55:40.040: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3621' Jul 27 10:55:40.155: INFO: stderr: "" Jul 27 10:55:40.155: INFO: stdout: "update-demo-nautilus-4czbh update-demo-nautilus-m9nkj " Jul 27 10:55:40.155: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4czbh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:40.262: INFO: stderr: "" Jul 27 10:55:40.262: INFO: stdout: "true" Jul 27 10:55:40.262: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4czbh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:40.361: INFO: stderr: "" Jul 27 10:55:40.361: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 27 10:55:40.361: INFO: validating pod update-demo-nautilus-4czbh Jul 27 10:55:40.366: INFO: got data: { "image": "nautilus.jpg" } Jul 27 10:55:40.366: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 27 10:55:40.366: INFO: update-demo-nautilus-4czbh is verified up and running Jul 27 10:55:40.366: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9nkj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:40.465: INFO: stderr: "" Jul 27 10:55:40.465: INFO: stdout: "true" Jul 27 10:55:40.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m9nkj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3621' Jul 27 10:55:40.565: INFO: stderr: "" Jul 27 10:55:40.565: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jul 27 10:55:40.565: INFO: validating pod update-demo-nautilus-m9nkj Jul 27 10:55:40.568: INFO: got data: { "image": "nautilus.jpg" } Jul 27 10:55:40.568: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jul 27 10:55:40.568: INFO: update-demo-nautilus-m9nkj is verified up and running STEP: using delete to clean up resources Jul 27 10:55:40.568: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3621' Jul 27 10:55:40.678: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 27 10:55:40.678: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jul 27 10:55:40.678: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3621' Jul 27 10:55:40.789: INFO: stderr: "No resources found in kubectl-3621 namespace.\n" Jul 27 10:55:40.790: INFO: stdout: "" Jul 27 10:55:40.790: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3621 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 27 10:55:40.900: INFO: stderr: "" Jul 27 10:55:40.901: INFO: stdout: "update-demo-nautilus-4czbh\nupdate-demo-nautilus-m9nkj\n" Jul 27 10:55:41.401: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3621' Jul 27 10:55:41.791: INFO: stderr: "No resources found in kubectl-3621 namespace.\n" Jul 27 10:55:41.791: INFO: stdout: "" Jul 27 10:55:41.791: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3621 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 27 10:55:41.892: INFO: stderr: "" Jul 27 10:55:41.892: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:55:41.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3621" for this suite. 
• [SLOW TEST:24.226 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":275,"completed":56,"skipped":1070,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:55:41.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:55:42.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9177" for this suite. 
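As context for the Kubelet test just logged: a pod whose container command fails immediately can still be deleted like any other pod. A minimal sketch, with a placeholder pod name rather than the one created by the framework:

    # create a pod whose only container exits with a non-zero status right away
    kubectl run always-fails --image=busybox --restart=Never --command -- /bin/false

    # deletion works regardless of whether the container ever ran successfully
    kubectl delete pod always-fails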
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":57,"skipped":1077,"failed":0} SS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:55:42.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name s-test-opt-del-1767ea8d-5004-4154-a394-04b3e910e70f STEP: Creating secret with name s-test-opt-upd-6e0ca424-7b30-4174-8528-fadbf56850d8 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-1767ea8d-5004-4154-a394-04b3e910e70f STEP: Updating secret s-test-opt-upd-6e0ca424-7b30-4174-8528-fadbf56850d8 STEP: Creating secret with name s-test-opt-create-1a879cd4-c405-4b22-ab37-f50d28511828 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:57:23.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4345" for this suite. • [SLOW TEST:101.064 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":58,"skipped":1079,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:57:23.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-downwardapi-z4jf STEP: Creating a pod to test atomic-volume-subpath Jul 27 10:57:23.558: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-z4jf" in namespace "subpath-2074" to be "Succeeded or Failed" Jul 27 10:57:23.568: INFO: Pod "pod-subpath-test-downwardapi-z4jf": 
Phase="Pending", Reason="", readiness=false. Elapsed: 9.556507ms Jul 27 10:57:25.572: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013613334s Jul 27 10:57:27.628: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 4.070322755s Jul 27 10:57:29.633: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 6.074447805s Jul 27 10:57:31.636: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 8.078021377s Jul 27 10:57:33.641: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 10.082452636s Jul 27 10:57:35.664: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 12.105709112s Jul 27 10:57:37.667: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 14.108866261s Jul 27 10:57:39.698: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 16.13999074s Jul 27 10:57:41.724: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 18.16558054s Jul 27 10:57:43.729: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 20.170599787s Jul 27 10:57:45.736: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Running", Reason="", readiness=true. Elapsed: 22.177772469s Jul 27 10:57:47.740: INFO: Pod "pod-subpath-test-downwardapi-z4jf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.181841174s STEP: Saw pod success Jul 27 10:57:47.740: INFO: Pod "pod-subpath-test-downwardapi-z4jf" satisfied condition "Succeeded or Failed" Jul 27 10:57:47.743: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-z4jf container test-container-subpath-downwardapi-z4jf: STEP: delete the pod Jul 27 10:57:47.828: INFO: Waiting for pod pod-subpath-test-downwardapi-z4jf to disappear Jul 27 10:57:47.856: INFO: Pod pod-subpath-test-downwardapi-z4jf no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-z4jf Jul 27 10:57:47.856: INFO: Deleting pod "pod-subpath-test-downwardapi-z4jf" in namespace "subpath-2074" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:57:47.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2074" for this suite. 
• [SLOW TEST:24.452 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":59,"skipped":1080,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:57:47.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating replication controller my-hostname-basic-cdaa7928-5a57-49d5-8c48-17b0636e2e54 Jul 27 10:57:47.987: INFO: Pod name my-hostname-basic-cdaa7928-5a57-49d5-8c48-17b0636e2e54: Found 0 pods out of 1 Jul 27 10:57:52.991: INFO: Pod name my-hostname-basic-cdaa7928-5a57-49d5-8c48-17b0636e2e54: Found 1 pods out of 1 Jul 27 10:57:52.991: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-cdaa7928-5a57-49d5-8c48-17b0636e2e54" are running Jul 27 10:57:52.994: INFO: Pod "my-hostname-basic-cdaa7928-5a57-49d5-8c48-17b0636e2e54-xmwrg" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-27 10:57:48 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-27 10:57:51 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-27 10:57:51 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-27 10:57:47 +0000 UTC Reason: Message:}]) Jul 27 10:57:52.994: INFO: Trying to dial the pod Jul 27 10:57:58.005: INFO: Controller my-hostname-basic-cdaa7928-5a57-49d5-8c48-17b0636e2e54: Got expected result from replica 1 [my-hostname-basic-cdaa7928-5a57-49d5-8c48-17b0636e2e54-xmwrg]: "my-hostname-basic-cdaa7928-5a57-49d5-8c48-17b0636e2e54-xmwrg", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:57:58.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5094" for this suite. 
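A minimal ReplicationController of the kind created above might look like the sketch below; the image is the agnhost test image already referenced elsewhere in this run, and the names are placeholders:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: hostname-basic-demo
    spec:
      replicas: 1
      selector:
        app: hostname-basic-demo
      template:
        metadata:
          labels:
            app: hostname-basic-demo
        spec:
          containers:
          - name: serve-hostname
            image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
            args: ["serve-hostname"]   # each replica answers requests with its own pod name
    EOF

    # once the pod is Running, each replica should report its own name
    kubectl get pods -l app=hostname-basic-demo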
• [SLOW TEST:10.145 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":60,"skipped":1100,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:57:58.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:58:14.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8055" for this suite. • [SLOW TEST:16.408 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance]","total":275,"completed":61,"skipped":1104,"failed":0} SSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:58:14.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jul 27 10:58:14.481: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:14.530: INFO: Number of nodes with available pods: 0 Jul 27 10:58:14.530: INFO: Node kali-worker is running more than one daemon pod Jul 27 10:58:15.536: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:15.641: INFO: Number of nodes with available pods: 0 Jul 27 10:58:15.641: INFO: Node kali-worker is running more than one daemon pod Jul 27 10:58:16.535: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:16.537: INFO: Number of nodes with available pods: 0 Jul 27 10:58:16.537: INFO: Node kali-worker is running more than one daemon pod Jul 27 10:58:17.536: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:17.540: INFO: Number of nodes with available pods: 0 Jul 27 10:58:17.540: INFO: Node kali-worker is running more than one daemon pod Jul 27 10:58:18.536: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:18.540: INFO: Number of nodes with available pods: 2 Jul 27 10:58:18.540: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Jul 27 10:58:18.577: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:18.581: INFO: Number of nodes with available pods: 1 Jul 27 10:58:18.581: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:19.594: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:19.599: INFO: Number of nodes with available pods: 1 Jul 27 10:58:19.599: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:20.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:20.591: INFO: Number of nodes with available pods: 1 Jul 27 10:58:20.591: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:21.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:21.591: INFO: Number of nodes with available pods: 1 Jul 27 10:58:21.591: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:22.585: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:22.589: INFO: Number of nodes with available pods: 1 Jul 27 10:58:22.589: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:23.585: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:23.588: INFO: Number of nodes with available pods: 1 Jul 27 10:58:23.588: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:24.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:24.590: INFO: Number of nodes with available pods: 1 Jul 27 10:58:24.590: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:25.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:25.590: INFO: Number of nodes with available pods: 1 Jul 27 10:58:25.590: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:26.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:26.590: INFO: Number of nodes with available pods: 1 Jul 27 10:58:26.590: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:27.585: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:27.589: INFO: Number of nodes with available pods: 1 Jul 27 10:58:27.589: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:28.587: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule 
TimeAdded:}], skip checking this node Jul 27 10:58:28.590: INFO: Number of nodes with available pods: 1 Jul 27 10:58:28.590: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:29.587: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:29.591: INFO: Number of nodes with available pods: 1 Jul 27 10:58:29.591: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:30.585: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:30.589: INFO: Number of nodes with available pods: 1 Jul 27 10:58:30.589: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:31.821: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:31.865: INFO: Number of nodes with available pods: 1 Jul 27 10:58:31.865: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:32.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:32.590: INFO: Number of nodes with available pods: 1 Jul 27 10:58:32.591: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:33.614: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:33.618: INFO: Number of nodes with available pods: 1 Jul 27 10:58:33.618: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:34.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:34.590: INFO: Number of nodes with available pods: 1 Jul 27 10:58:34.590: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:35.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:35.589: INFO: Number of nodes with available pods: 1 Jul 27 10:58:35.589: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:36.595: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:36.597: INFO: Number of nodes with available pods: 1 Jul 27 10:58:36.597: INFO: Node kali-worker2 is running more than one daemon pod Jul 27 10:58:37.585: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jul 27 10:58:37.587: INFO: Number of nodes with available pods: 2 Jul 27 10:58:37.587: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2330, will wait for the garbage collector to delete the pods 
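The repeated "DaemonSet pods can't tolerate node kali-control-plane" lines above are expected: the test DaemonSet carries no toleration for the control-plane NoSchedule taint, so that node is skipped and only the two workers count. A DaemonSet that should also run on such nodes would add a toleration along these lines (a sketch with placeholder names, not the manifest used by the test):

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-demo
    spec:
      selector:
        matchLabels:
          app: daemon-demo
      template:
        metadata:
          labels:
            app: daemon-demo
        spec:
          tolerations:
          # allow scheduling onto nodes carrying the master NoSchedule taint
          - key: node-role.kubernetes.io/master
            operator: Exists
            effect: NoSchedule
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.2
    EOF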
Jul 27 10:58:37.648: INFO: Deleting DaemonSet.extensions daemon-set took: 6.15409ms Jul 27 10:58:38.048: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.218331ms Jul 27 10:58:53.351: INFO: Number of nodes with available pods: 0 Jul 27 10:58:53.351: INFO: Number of running nodes: 0, number of available pods: 0 Jul 27 10:58:53.357: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2330/daemonsets","resourceVersion":"4552132"},"items":null} Jul 27 10:58:53.360: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2330/pods","resourceVersion":"4552132"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:58:53.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2330" for this suite. • [SLOW TEST:38.980 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":62,"skipped":1107,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:58:53.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:59:10.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6010" for this suite. • [SLOW TEST:16.980 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":275,"completed":63,"skipped":1139,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:59:10.382: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating cluster-info Jul 27 10:59:10.463: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config cluster-info' Jul 27 10:59:10.557: INFO: stderr: "" Jul 27 10:59:10.558: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:59:10.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2291" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":275,"completed":64,"skipped":1163,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:59:10.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Jul 27 10:59:10.681: INFO: Waiting up to 5m0s for pod "downward-api-d7518eb1-65a6-4cb5-a6f3-eb6d3922e26f" in namespace "downward-api-644" to be "Succeeded or Failed" Jul 27 10:59:10.684: INFO: Pod "downward-api-d7518eb1-65a6-4cb5-a6f3-eb6d3922e26f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.472074ms Jul 27 10:59:12.689: INFO: Pod "downward-api-d7518eb1-65a6-4cb5-a6f3-eb6d3922e26f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007327647s Jul 27 10:59:14.694: INFO: Pod "downward-api-d7518eb1-65a6-4cb5-a6f3-eb6d3922e26f": Phase="Running", Reason="", readiness=true. Elapsed: 4.01212694s Jul 27 10:59:16.699: INFO: Pod "downward-api-d7518eb1-65a6-4cb5-a6f3-eb6d3922e26f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017115497s STEP: Saw pod success Jul 27 10:59:16.699: INFO: Pod "downward-api-d7518eb1-65a6-4cb5-a6f3-eb6d3922e26f" satisfied condition "Succeeded or Failed" Jul 27 10:59:16.702: INFO: Trying to get logs from node kali-worker pod downward-api-d7518eb1-65a6-4cb5-a6f3-eb6d3922e26f container dapi-container: STEP: delete the pod Jul 27 10:59:16.722: INFO: Waiting for pod downward-api-d7518eb1-65a6-4cb5-a6f3-eb6d3922e26f to disappear Jul 27 10:59:16.772: INFO: Pod downward-api-d7518eb1-65a6-4cb5-a6f3-eb6d3922e26f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:59:16.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-644" for this suite. • [SLOW TEST:6.193 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":65,"skipped":1168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:59:16.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 10:59:20.914: INFO: Waiting up to 5m0s for pod "client-envvars-1a66e64e-207b-4021-8ea8-91b48c47aa4c" in namespace "pods-6958" to be "Succeeded or Failed" Jul 27 10:59:20.964: INFO: Pod "client-envvars-1a66e64e-207b-4021-8ea8-91b48c47aa4c": Phase="Pending", Reason="", readiness=false. Elapsed: 50.156553ms Jul 27 10:59:23.036: INFO: Pod "client-envvars-1a66e64e-207b-4021-8ea8-91b48c47aa4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122185486s Jul 27 10:59:25.040: INFO: Pod "client-envvars-1a66e64e-207b-4021-8ea8-91b48c47aa4c": Phase="Running", Reason="", readiness=true. Elapsed: 4.125972267s Jul 27 10:59:27.044: INFO: Pod "client-envvars-1a66e64e-207b-4021-8ea8-91b48c47aa4c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.130262684s STEP: Saw pod success Jul 27 10:59:27.044: INFO: Pod "client-envvars-1a66e64e-207b-4021-8ea8-91b48c47aa4c" satisfied condition "Succeeded or Failed" Jul 27 10:59:27.048: INFO: Trying to get logs from node kali-worker2 pod client-envvars-1a66e64e-207b-4021-8ea8-91b48c47aa4c container env3cont: STEP: delete the pod Jul 27 10:59:27.242: INFO: Waiting for pod client-envvars-1a66e64e-207b-4021-8ea8-91b48c47aa4c to disappear Jul 27 10:59:27.253: INFO: Pod client-envvars-1a66e64e-207b-4021-8ea8-91b48c47aa4c no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:59:27.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6958" for this suite. • [SLOW TEST:10.480 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":66,"skipped":1219,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:59:27.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-configmap-ctqt STEP: Creating a pod to test atomic-volume-subpath Jul 27 10:59:27.368: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-ctqt" in namespace "subpath-6683" to be "Succeeded or Failed" Jul 27 10:59:27.395: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Pending", Reason="", readiness=false. Elapsed: 27.838011ms Jul 27 10:59:29.400: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032003219s Jul 27 10:59:31.404: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 4.036177885s Jul 27 10:59:33.408: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 6.040184071s Jul 27 10:59:35.413: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 8.045132797s Jul 27 10:59:37.417: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 10.049272749s Jul 27 10:59:39.461: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 12.093715837s Jul 27 10:59:41.466: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. 
Elapsed: 14.098616492s Jul 27 10:59:43.497: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 16.129792126s Jul 27 10:59:45.501: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 18.133674983s Jul 27 10:59:47.505: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 20.137489302s Jul 27 10:59:49.875: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 22.507435407s Jul 27 10:59:51.879: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Running", Reason="", readiness=true. Elapsed: 24.511644797s Jul 27 10:59:53.905: INFO: Pod "pod-subpath-test-configmap-ctqt": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.537356406s STEP: Saw pod success Jul 27 10:59:53.905: INFO: Pod "pod-subpath-test-configmap-ctqt" satisfied condition "Succeeded or Failed" Jul 27 10:59:53.909: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-ctqt container test-container-subpath-configmap-ctqt: STEP: delete the pod Jul 27 10:59:53.978: INFO: Waiting for pod pod-subpath-test-configmap-ctqt to disappear Jul 27 10:59:54.003: INFO: Pod pod-subpath-test-configmap-ctqt no longer exists STEP: Deleting pod pod-subpath-test-configmap-ctqt Jul 27 10:59:54.004: INFO: Deleting pod "pod-subpath-test-configmap-ctqt" in namespace "subpath-6683" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:59:54.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6683" for this suite. • [SLOW TEST:26.752 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":67,"skipped":1240,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:59:54.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-a1623060-a463-4fe8-99c2-f822e670dcf9 STEP: Creating a pod to test consume secrets Jul 27 10:59:54.107: INFO: Waiting up to 5m0s for pod "pod-secrets-aed25345-b33a-4eb5-aafe-d9aac48523eb" in namespace "secrets-9916" to be "Succeeded or 
Failed" Jul 27 10:59:54.130: INFO: Pod "pod-secrets-aed25345-b33a-4eb5-aafe-d9aac48523eb": Phase="Pending", Reason="", readiness=false. Elapsed: 22.677479ms Jul 27 10:59:56.133: INFO: Pod "pod-secrets-aed25345-b33a-4eb5-aafe-d9aac48523eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026054094s Jul 27 10:59:58.138: INFO: Pod "pod-secrets-aed25345-b33a-4eb5-aafe-d9aac48523eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030467107s STEP: Saw pod success Jul 27 10:59:58.138: INFO: Pod "pod-secrets-aed25345-b33a-4eb5-aafe-d9aac48523eb" satisfied condition "Succeeded or Failed" Jul 27 10:59:58.141: INFO: Trying to get logs from node kali-worker pod pod-secrets-aed25345-b33a-4eb5-aafe-d9aac48523eb container secret-volume-test: STEP: delete the pod Jul 27 10:59:58.199: INFO: Waiting for pod pod-secrets-aed25345-b33a-4eb5-aafe-d9aac48523eb to disappear Jul 27 10:59:58.205: INFO: Pod pod-secrets-aed25345-b33a-4eb5-aafe-d9aac48523eb no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 10:59:58.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9916" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1363,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 10:59:58.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 27 10:59:58.931: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 27 11:00:01.122: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444398, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444399, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444398, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 27 11:00:04.295: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 11:00:04.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:00:05.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-4260" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.526 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":69,"skipped":1363,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:00:05.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name cm-test-opt-del-a2e243ea-5d8d-40f4-9a5a-708b3523521b STEP: Creating configMap with name cm-test-opt-upd-9408a167-1c77-4a89-9815-e6c883553cee STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-a2e243ea-5d8d-40f4-9a5a-708b3523521b STEP: Updating configmap cm-test-opt-upd-9408a167-1c77-4a89-9815-e6c883553cee STEP: Creating configMap with name cm-test-opt-create-d3c15435-7b0d-4a7f-ba0a-ee25e4526185 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:00:16.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8246" for this suite. 
• [SLOW TEST:10.364 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":70,"skipped":1372,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:00:16.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating an pod Jul 27 11:00:16.154: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-1814 -- logs-generator --log-lines-total 100 --run-duration 20s' Jul 27 11:00:16.262: INFO: stderr: "" Jul 27 11:00:16.262: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Jul 27 11:00:16.262: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jul 27 11:00:16.262: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1814" to be "running and ready, or succeeded" Jul 27 11:00:16.280: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 18.175463ms Jul 27 11:00:18.602: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339746198s Jul 27 11:00:20.606: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.34399725s Jul 27 11:00:20.606: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jul 27 11:00:20.606: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings Jul 27 11:00:20.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1814' Jul 27 11:00:20.714: INFO: stderr: "" Jul 27 11:00:20.714: INFO: stdout: "I0727 11:00:19.498924 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/445 350\nI0727 11:00:19.699065 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/5xxt 297\nI0727 11:00:19.899223 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/qzc 301\nI0727 11:00:20.099193 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/2wrj 582\nI0727 11:00:20.299117 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/dt97 516\nI0727 11:00:20.499123 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/4qzs 363\nI0727 11:00:20.699088 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/mk78 476\n" STEP: limiting log lines Jul 27 11:00:20.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1814 --tail=1' Jul 27 11:00:20.828: INFO: stderr: "" Jul 27 11:00:20.828: INFO: stdout: "I0727 11:00:20.699088 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/mk78 476\n" Jul 27 11:00:20.828: INFO: got output "I0727 11:00:20.699088 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/mk78 476\n" STEP: limiting log bytes Jul 27 11:00:20.828: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1814 --limit-bytes=1' Jul 27 11:00:20.950: INFO: stderr: "" Jul 27 11:00:20.950: INFO: stdout: "I" Jul 27 11:00:20.950: INFO: got output "I" STEP: exposing timestamps Jul 27 11:00:20.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1814 --tail=1 --timestamps' Jul 27 11:00:21.062: INFO: stderr: "" Jul 27 11:00:21.062: INFO: stdout: "2020-07-27T11:00:20.899278348Z I0727 11:00:20.899117 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/rftj 295\n" Jul 27 11:00:21.062: INFO: got output "2020-07-27T11:00:20.899278348Z I0727 11:00:20.899117 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/rftj 295\n" STEP: restricting to a time range Jul 27 11:00:23.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1814 --since=1s' Jul 27 11:00:23.780: INFO: stderr: "" Jul 27 11:00:23.780: INFO: stdout: "I0727 11:00:22.699087 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/tsn 319\nI0727 11:00:22.899093 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/n6js 514\nI0727 11:00:23.099093 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/dbz 577\nI0727 11:00:23.299109 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/mrm 491\nI0727 11:00:23.499101 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/ctvn 308\nI0727 11:00:23.699116 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/csww 266\n" Jul 27 11:00:23.780: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1814 --since=24h' Jul 27 11:00:23.917: INFO: stderr: "" Jul 27 11:00:23.917: INFO: 
stdout: "I0727 11:00:19.498924 1 logs_generator.go:76] 0 GET /api/v1/namespaces/ns/pods/445 350\nI0727 11:00:19.699065 1 logs_generator.go:76] 1 GET /api/v1/namespaces/default/pods/5xxt 297\nI0727 11:00:19.899223 1 logs_generator.go:76] 2 GET /api/v1/namespaces/ns/pods/qzc 301\nI0727 11:00:20.099193 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/2wrj 582\nI0727 11:00:20.299117 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/kube-system/pods/dt97 516\nI0727 11:00:20.499123 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/ns/pods/4qzs 363\nI0727 11:00:20.699088 1 logs_generator.go:76] 6 PUT /api/v1/namespaces/ns/pods/mk78 476\nI0727 11:00:20.899117 1 logs_generator.go:76] 7 GET /api/v1/namespaces/default/pods/rftj 295\nI0727 11:00:21.099175 1 logs_generator.go:76] 8 GET /api/v1/namespaces/ns/pods/gcwg 281\nI0727 11:00:21.299140 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/xx2 538\nI0727 11:00:21.499120 1 logs_generator.go:76] 10 GET /api/v1/namespaces/default/pods/sp8 272\nI0727 11:00:21.699098 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/hvgr 374\nI0727 11:00:21.899128 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/5xjr 237\nI0727 11:00:22.099090 1 logs_generator.go:76] 13 GET /api/v1/namespaces/ns/pods/nsjx 453\nI0727 11:00:22.299090 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/x69 596\nI0727 11:00:22.499104 1 logs_generator.go:76] 15 GET /api/v1/namespaces/ns/pods/pgp 254\nI0727 11:00:22.699087 1 logs_generator.go:76] 16 POST /api/v1/namespaces/ns/pods/tsn 319\nI0727 11:00:22.899093 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/n6js 514\nI0727 11:00:23.099093 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/dbz 577\nI0727 11:00:23.299109 1 logs_generator.go:76] 19 POST /api/v1/namespaces/ns/pods/mrm 491\nI0727 11:00:23.499101 1 logs_generator.go:76] 20 PUT /api/v1/namespaces/kube-system/pods/ctvn 308\nI0727 11:00:23.699116 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/csww 266\nI0727 11:00:23.899090 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/ss8 437\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Jul 27 11:00:23.918: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1814' Jul 27 11:00:33.526: INFO: stderr: "" Jul 27 11:00:33.526: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:00:33.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1814" for this suite. 
• [SLOW TEST:17.555 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":71,"skipped":1398,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:00:33.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:00:50.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1763" for this suite. • [SLOW TEST:17.281 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":275,"completed":72,"skipped":1486,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:00:50.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override command Jul 27 11:00:51.161: INFO: Waiting up to 5m0s for pod "client-containers-0fc8ef10-8cd5-4c73-bab8-5129c727306e" in namespace "containers-13" to be "Succeeded or Failed" Jul 27 11:00:51.165: INFO: Pod "client-containers-0fc8ef10-8cd5-4c73-bab8-5129c727306e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.611689ms Jul 27 11:00:53.169: INFO: Pod "client-containers-0fc8ef10-8cd5-4c73-bab8-5129c727306e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007500869s Jul 27 11:00:55.173: INFO: Pod "client-containers-0fc8ef10-8cd5-4c73-bab8-5129c727306e": Phase="Running", Reason="", readiness=true. Elapsed: 4.011533304s Jul 27 11:00:57.176: INFO: Pod "client-containers-0fc8ef10-8cd5-4c73-bab8-5129c727306e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014944832s STEP: Saw pod success Jul 27 11:00:57.177: INFO: Pod "client-containers-0fc8ef10-8cd5-4c73-bab8-5129c727306e" satisfied condition "Succeeded or Failed" Jul 27 11:00:57.179: INFO: Trying to get logs from node kali-worker2 pod client-containers-0fc8ef10-8cd5-4c73-bab8-5129c727306e container test-container: STEP: delete the pod Jul 27 11:00:57.264: INFO: Waiting for pod client-containers-0fc8ef10-8cd5-4c73-bab8-5129c727306e to disappear Jul 27 11:00:57.273: INFO: Pod client-containers-0fc8ef10-8cd5-4c73-bab8-5129c727306e no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:00:57.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-13" for this suite. 
• [SLOW TEST:6.337 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1493,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:00:57.279: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jul 27 11:00:57.431: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7307 /api/v1/namespaces/watch-7307/configmaps/e2e-watch-test-resource-version f51e80a9-227c-4bf3-a5b5-d4247c7ba3ac 4552875 0 2020-07-27 11:00:57 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-07-27 11:00:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Jul 27 11:00:57.432: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-7307 /api/v1/namespaces/watch-7307/configmaps/e2e-watch-test-resource-version f51e80a9-227c-4bf3-a5b5-d4247c7ba3ac 4552876 0 2020-07-27 11:00:57 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] [{e2e.test Update v1 2020-07-27 11:00:57 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:00:57.432: INFO: Waiting 
up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7307" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":74,"skipped":1505,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:00:57.476: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jul 27 11:01:02.120: INFO: Successfully updated pod "pod-update-481083c4-3d74-4c99-8eaa-6a60c96a0dbb" STEP: verifying the updated pod is in kubernetes Jul 27 11:01:02.159: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:01:02.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7099" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":75,"skipped":1520,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:01:02.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jul 27 11:01:07.305: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:01:07.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6626" for this suite. 
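The adopt-and-release behaviour above comes from the ReplicaSet controller matching bare pods by label selector and writing (or clearing) an ownerReference on them; a minimal sketch with hypothetical object names, assuming the bare pod is created before the ReplicaSet:

# Bare pod whose label matches the ReplicaSet selector below
kubectl run pod-adoption-release --image=docker.io/library/httpd:2.4.38-alpine \
  --restart=Never --labels=name=pod-adoption-release
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: adoption-demo                    # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# The pre-existing pod is adopted: it now carries an ownerReference to the ReplicaSet
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences[0].name}'
# Relabelling the pod so the selector no longer matches releases it again
kubectl label pod pod-adoption-release name=released --overwrite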
• [SLOW TEST:5.261 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":76,"skipped":1528,"failed":0} SSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:01:07.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: running the image docker.io/library/httpd:2.4.38-alpine Jul 27 11:01:07.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6309' Jul 27 11:01:08.037: INFO: stderr: "" Jul 27 11:01:08.037: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423 Jul 27 11:01:08.058: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-6309' Jul 27 11:01:11.860: INFO: stderr: "" Jul 27 11:01:11.860: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:01:11.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6309" for this suite. 
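For reference on the step above, --restart=Never makes kubectl run create a bare Pod whose containers are not restarted after they exit; a minimal sketch reusing the image from this run, with a placeholder namespace:

kubectl run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine \
  --restart=Never -n demo-ns                    # demo-ns is a placeholder namespace
kubectl get pod e2e-test-httpd-pod -n demo-ns   # restartPolicy: Never, no owning controller
kubectl delete pod e2e-test-httpd-pod -n demo-ns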
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":275,"completed":77,"skipped":1532,"failed":0} SSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:01:11.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206 STEP: creating the pod Jul 27 11:01:12.008: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4713' Jul 27 11:01:12.340: INFO: stderr: "" Jul 27 11:01:12.340: INFO: stdout: "pod/pause created\n" Jul 27 11:01:12.340: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jul 27 11:01:12.340: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4713" to be "running and ready" Jul 27 11:01:12.346: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 5.661663ms Jul 27 11:01:14.366: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02538131s Jul 27 11:01:16.370: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029436243s Jul 27 11:01:18.374: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.033330677s Jul 27 11:01:18.374: INFO: Pod "pause" satisfied condition "running and ready" Jul 27 11:01:18.374: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: adding the label testing-label with value testing-label-value to a pod Jul 27 11:01:18.374: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4713' Jul 27 11:01:18.476: INFO: stderr: "" Jul 27 11:01:18.476: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jul 27 11:01:18.476: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4713' Jul 27 11:01:18.573: INFO: stderr: "" Jul 27 11:01:18.573: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s testing-label-value\n" STEP: removing the label testing-label of a pod Jul 27 11:01:18.573: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4713' Jul 27 11:01:18.679: INFO: stderr: "" Jul 27 11:01:18.679: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jul 27 11:01:18.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4713' Jul 27 11:01:18.817: INFO: stderr: "" Jul 27 11:01:18.817: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 6s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213 STEP: using delete to clean up resources Jul 27 11:01:18.817: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4713' Jul 27 11:01:18.941: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jul 27 11:01:18.941: INFO: stdout: "pod \"pause\" force deleted\n" Jul 27 11:01:18.941: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4713' Jul 27 11:01:19.299: INFO: stderr: "No resources found in kubectl-4713 namespace.\n" Jul 27 11:01:19.299: INFO: stdout: "" Jul 27 11:01:19.299: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4713 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jul 27 11:01:19.387: INFO: stderr: "" Jul 27 11:01:19.387: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:01:19.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4713" for this suite. 
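The label lifecycle verified above can be reproduced directly: a key=value argument adds or updates a label, a trailing dash removes it, and -L prints the value as an extra column; names are reused from this run for illustration only:

kubectl label pod pause testing-label=testing-label-value -n kubectl-4713
kubectl get pod pause -L testing-label -n kubectl-4713      # TESTING-LABEL column is populated
kubectl label pod pause testing-label- -n kubectl-4713      # trailing '-' removes the label
kubectl get pod pause -L testing-label -n kubectl-4713      # column is now empty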
• [SLOW TEST:7.519 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":275,"completed":78,"skipped":1539,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:01:19.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir volume type on tmpfs Jul 27 11:01:19.751: INFO: Waiting up to 5m0s for pod "pod-28cfe182-4ace-450c-9cdf-c7f62a4ef5d4" in namespace "emptydir-192" to be "Succeeded or Failed" Jul 27 11:01:19.798: INFO: Pod "pod-28cfe182-4ace-450c-9cdf-c7f62a4ef5d4": Phase="Pending", Reason="", readiness=false. Elapsed: 46.079937ms Jul 27 11:01:21.893: INFO: Pod "pod-28cfe182-4ace-450c-9cdf-c7f62a4ef5d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141863345s Jul 27 11:01:23.897: INFO: Pod "pod-28cfe182-4ace-450c-9cdf-c7f62a4ef5d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.145291545s Jul 27 11:01:25.911: INFO: Pod "pod-28cfe182-4ace-450c-9cdf-c7f62a4ef5d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.159498281s STEP: Saw pod success Jul 27 11:01:25.911: INFO: Pod "pod-28cfe182-4ace-450c-9cdf-c7f62a4ef5d4" satisfied condition "Succeeded or Failed" Jul 27 11:01:25.913: INFO: Trying to get logs from node kali-worker2 pod pod-28cfe182-4ace-450c-9cdf-c7f62a4ef5d4 container test-container: STEP: delete the pod Jul 27 11:01:25.947: INFO: Waiting for pod pod-28cfe182-4ace-450c-9cdf-c7f62a4ef5d4 to disappear Jul 27 11:01:25.951: INFO: Pod pod-28cfe182-4ace-450c-9cdf-c7f62a4ef5d4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:01:25.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-192" for this suite. 
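An emptyDir backed by tmpfs, as exercised above, is requested with medium: Memory on the volume; a minimal sketch using a hypothetical pod that prints the mount and its mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "mount | grep /mnt/volume && ls -ld /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                 # back the volume with tmpfs instead of node disk
EOF
kubectl logs emptydir-tmpfs-demo     # shows a tmpfs mount and its permission bits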
• [SLOW TEST:6.565 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":79,"skipped":1556,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:01:25.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Jul 27 11:01:26.000: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jul 27 11:01:26.039: INFO: Waiting for terminating namespaces to be deleted... Jul 27 11:01:26.042: INFO: Logging pods the kubelet thinks is on node kali-worker before test Jul 27 11:01:26.047: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Jul 27 11:01:26.047: INFO: Container kindnet-cni ready: true, restart count 1 Jul 27 11:01:26.047: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Jul 27 11:01:26.047: INFO: Container kube-proxy ready: true, restart count 0 Jul 27 11:01:26.047: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Jul 27 11:01:26.052: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 27 11:01:26.053: INFO: Container kube-proxy ready: true, restart count 0 Jul 27 11:01:26.053: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Jul 27 11:01:26.053: INFO: Container kindnet-cni ready: true, restart count 1 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-759793c6-2a18-4799-b44d-57285fb52c96 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-759793c6-2a18-4799-b44d-57285fb52c96 off the node kali-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-759793c6-2a18-4799-b44d-57285fb52c96 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:01:44.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8561" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:18.645 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":80,"skipped":1557,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:01:44.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 27 11:01:45.307: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 27 11:01:47.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444505, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444505, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444505, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444505, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 27 11:01:50.534: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:01:52.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6198" for this suite. STEP: Destroying namespace "webhook-6198-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.500 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":81,"skipped":1580,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:01:53.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Jul 27 11:01:53.151: INFO: Waiting up to 5m0s for pod "pod-52f1c088-b31f-42ca-a49d-4945507b0a3e" in namespace "emptydir-384" to be "Succeeded or Failed" Jul 27 11:01:53.277: INFO: Pod "pod-52f1c088-b31f-42ca-a49d-4945507b0a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 125.46516ms Jul 27 11:01:55.281: INFO: Pod "pod-52f1c088-b31f-42ca-a49d-4945507b0a3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129762308s Jul 27 11:01:57.284: INFO: Pod "pod-52f1c088-b31f-42ca-a49d-4945507b0a3e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.13243726s Jul 27 11:01:59.289: INFO: Pod "pod-52f1c088-b31f-42ca-a49d-4945507b0a3e": Phase="Running", Reason="", readiness=true. Elapsed: 6.137274393s Jul 27 11:02:01.293: INFO: Pod "pod-52f1c088-b31f-42ca-a49d-4945507b0a3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.141442425s STEP: Saw pod success Jul 27 11:02:01.293: INFO: Pod "pod-52f1c088-b31f-42ca-a49d-4945507b0a3e" satisfied condition "Succeeded or Failed" Jul 27 11:02:01.296: INFO: Trying to get logs from node kali-worker pod pod-52f1c088-b31f-42ca-a49d-4945507b0a3e container test-container: STEP: delete the pod Jul 27 11:02:01.316: INFO: Waiting for pod pod-52f1c088-b31f-42ca-a49d-4945507b0a3e to disappear Jul 27 11:02:01.320: INFO: Pod pod-52f1c088-b31f-42ca-a49d-4945507b0a3e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:02:01.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-384" for this suite. • [SLOW TEST:8.222 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":82,"skipped":1612,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:02:01.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 27 11:02:01.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ee77811c-cd4f-4729-8550-2a906f98bed1" in namespace "downward-api-6806" to be "Succeeded or Failed" Jul 27 11:02:01.516: INFO: Pod "downwardapi-volume-ee77811c-cd4f-4729-8550-2a906f98bed1": Phase="Pending", Reason="", readiness=false. Elapsed: 71.942914ms Jul 27 11:02:03.600: INFO: Pod "downwardapi-volume-ee77811c-cd4f-4729-8550-2a906f98bed1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155878018s Jul 27 11:02:05.605: INFO: Pod "downwardapi-volume-ee77811c-cd4f-4729-8550-2a906f98bed1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.160549155s STEP: Saw pod success Jul 27 11:02:05.605: INFO: Pod "downwardapi-volume-ee77811c-cd4f-4729-8550-2a906f98bed1" satisfied condition "Succeeded or Failed" Jul 27 11:02:05.608: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-ee77811c-cd4f-4729-8550-2a906f98bed1 container client-container: STEP: delete the pod Jul 27 11:02:05.650: INFO: Waiting for pod downwardapi-volume-ee77811c-cd4f-4729-8550-2a906f98bed1 to disappear Jul 27 11:02:05.690: INFO: Pod downwardapi-volume-ee77811c-cd4f-4729-8550-2a906f98bed1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:02:05.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6806" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":83,"skipped":1620,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:02:05.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jul 27 11:02:05.776: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering the sample API server. 
Jul 27 11:02:06.516: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jul 27 11:02:09.120: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444526, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444526, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444526, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444526, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 11:02:11.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444526, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444526, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444526, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444526, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 11:02:13.740: INFO: Waited 610.981176ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:02:14.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7949" for this suite. 
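Registering an aggregated API such as the sample API server above is done through an APIService object that points the kube-aggregator at an in-cluster Service; a minimal sketch with hypothetical group, service, and namespace names (TLS verification is skipped only for brevity):

kubectl apply -f - <<'EOF'
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com
spec:
  group: wardle.example.com          # hypothetical API group
  version: v1alpha1
  service:
    name: sample-api                 # hypothetical Service fronting the extension apiserver
    namespace: sample-system         # hypothetical namespace
  insecureSkipTLSVerify: true        # use caBundle in real deployments
  groupPriorityMinimum: 2000
  versionPriority: 200
EOF
kubectl get apiservice v1alpha1.wardle.example.com                 # Available=True once paired
kubectl get --raw /apis/wardle.example.com/v1alpha1 | head -c 200  # discovery served via the aggregator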
• [SLOW TEST:8.620 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":84,"skipped":1622,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:02:14.320: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on tmpfs Jul 27 11:02:14.404: INFO: Waiting up to 5m0s for pod "pod-0da4ccbd-cb60-4729-9b53-ba92c26b0a58" in namespace "emptydir-5536" to be "Succeeded or Failed" Jul 27 11:02:14.408: INFO: Pod "pod-0da4ccbd-cb60-4729-9b53-ba92c26b0a58": Phase="Pending", Reason="", readiness=false. Elapsed: 3.660681ms Jul 27 11:02:16.412: INFO: Pod "pod-0da4ccbd-cb60-4729-9b53-ba92c26b0a58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00788639s Jul 27 11:02:18.416: INFO: Pod "pod-0da4ccbd-cb60-4729-9b53-ba92c26b0a58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011803599s STEP: Saw pod success Jul 27 11:02:18.416: INFO: Pod "pod-0da4ccbd-cb60-4729-9b53-ba92c26b0a58" satisfied condition "Succeeded or Failed" Jul 27 11:02:18.419: INFO: Trying to get logs from node kali-worker2 pod pod-0da4ccbd-cb60-4729-9b53-ba92c26b0a58 container test-container: STEP: delete the pod Jul 27 11:02:18.516: INFO: Waiting for pod pod-0da4ccbd-cb60-4729-9b53-ba92c26b0a58 to disappear Jul 27 11:02:18.690: INFO: Pod pod-0da4ccbd-cb60-4729-9b53-ba92c26b0a58 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:02:18.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5536" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1641,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:02:18.698: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jul 27 11:02:19.343: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:02:33.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-108" for this suite. • [SLOW TEST:14.716 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":86,"skipped":1697,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:02:33.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 11:02:33.517: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-8f308d33-0dc7-4885-85fc-8b1c18dc0377" in namespace "security-context-test-9083" to be "Succeeded or Failed" Jul 27 11:02:33.522: INFO: Pod "busybox-privileged-false-8f308d33-0dc7-4885-85fc-8b1c18dc0377": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.574348ms Jul 27 11:02:35.526: INFO: Pod "busybox-privileged-false-8f308d33-0dc7-4885-85fc-8b1c18dc0377": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008975302s Jul 27 11:02:37.702: INFO: Pod "busybox-privileged-false-8f308d33-0dc7-4885-85fc-8b1c18dc0377": Phase="Pending", Reason="", readiness=false. Elapsed: 4.185355362s Jul 27 11:02:39.706: INFO: Pod "busybox-privileged-false-8f308d33-0dc7-4885-85fc-8b1c18dc0377": Phase="Pending", Reason="", readiness=false. Elapsed: 6.188825699s Jul 27 11:02:41.710: INFO: Pod "busybox-privileged-false-8f308d33-0dc7-4885-85fc-8b1c18dc0377": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.192792708s Jul 27 11:02:41.710: INFO: Pod "busybox-privileged-false-8f308d33-0dc7-4885-85fc-8b1c18dc0377" satisfied condition "Succeeded or Failed" Jul 27 11:02:41.716: INFO: Got logs for pod "busybox-privileged-false-8f308d33-0dc7-4885-85fc-8b1c18dc0377": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:02:41.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-9083" for this suite. • [SLOW TEST:8.309 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1721,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:02:41.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-7310/secret-test-c2b47591-51bf-44e9-a229-4651b01e124e STEP: Creating a pod to test consume secrets Jul 27 11:02:42.021: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d4805ea-79ec-4dbc-96b9-8ebefee04a6a" in namespace "secrets-7310" to be "Succeeded or Failed" Jul 27 11:02:42.041: INFO: Pod "pod-configmaps-0d4805ea-79ec-4dbc-96b9-8ebefee04a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 19.599147ms Jul 27 11:02:44.158: INFO: Pod "pod-configmaps-0d4805ea-79ec-4dbc-96b9-8ebefee04a6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.136269479s Jul 27 11:02:46.739: INFO: Pod "pod-configmaps-0d4805ea-79ec-4dbc-96b9-8ebefee04a6a": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.71699039s Jul 27 11:02:48.743: INFO: Pod "pod-configmaps-0d4805ea-79ec-4dbc-96b9-8ebefee04a6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.721681013s STEP: Saw pod success Jul 27 11:02:48.743: INFO: Pod "pod-configmaps-0d4805ea-79ec-4dbc-96b9-8ebefee04a6a" satisfied condition "Succeeded or Failed" Jul 27 11:02:48.746: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-0d4805ea-79ec-4dbc-96b9-8ebefee04a6a container env-test: STEP: delete the pod Jul 27 11:02:48.766: INFO: Waiting for pod pod-configmaps-0d4805ea-79ec-4dbc-96b9-8ebefee04a6a to disappear Jul 27 11:02:48.771: INFO: Pod pod-configmaps-0d4805ea-79ec-4dbc-96b9-8ebefee04a6a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:02:48.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7310" for this suite. • [SLOW TEST:7.070 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1734,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:02:48.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Performing setup for networking test in namespace pod-network-test-607 STEP: creating a selector STEP: Creating the service pods in kubernetes Jul 27 11:02:48.896: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable Jul 27 11:02:48.981: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 27 11:02:51.181: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 27 11:02:53.104: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true) Jul 27 11:02:54.985: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 27 11:02:56.988: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 27 11:02:58.985: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 27 11:03:00.985: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 27 11:03:02.985: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 27 11:03:04.985: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 27 11:03:06.985: INFO: The status of Pod netserver-0 is Running (Ready = false) Jul 27 11:03:08.985: INFO: 
The status of Pod netserver-0 is Running (Ready = true) Jul 27 11:03:08.999: INFO: The status of Pod netserver-1 is Running (Ready = false) Jul 27 11:03:11.003: INFO: The status of Pod netserver-1 is Running (Ready = true) STEP: Creating test pods Jul 27 11:03:17.059: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.154 8081 | grep -v '^\s*$'] Namespace:pod-network-test-607 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 11:03:17.059: INFO: >>> kubeConfig: /root/.kube/config I0727 11:03:17.096936 7 log.go:172] (0xc002d66dc0) (0xc001a70aa0) Create stream I0727 11:03:17.096965 7 log.go:172] (0xc002d66dc0) (0xc001a70aa0) Stream added, broadcasting: 1 I0727 11:03:17.099477 7 log.go:172] (0xc002d66dc0) Reply frame received for 1 I0727 11:03:17.099535 7 log.go:172] (0xc002d66dc0) (0xc000c69720) Create stream I0727 11:03:17.099561 7 log.go:172] (0xc002d66dc0) (0xc000c69720) Stream added, broadcasting: 3 I0727 11:03:17.100974 7 log.go:172] (0xc002d66dc0) Reply frame received for 3 I0727 11:03:17.101060 7 log.go:172] (0xc002d66dc0) (0xc000c69860) Create stream I0727 11:03:17.101084 7 log.go:172] (0xc002d66dc0) (0xc000c69860) Stream added, broadcasting: 5 I0727 11:03:17.102237 7 log.go:172] (0xc002d66dc0) Reply frame received for 5 I0727 11:03:18.191686 7 log.go:172] (0xc002d66dc0) Data frame received for 3 I0727 11:03:18.191718 7 log.go:172] (0xc000c69720) (3) Data frame handling I0727 11:03:18.191742 7 log.go:172] (0xc000c69720) (3) Data frame sent I0727 11:03:18.191751 7 log.go:172] (0xc002d66dc0) Data frame received for 3 I0727 11:03:18.191766 7 log.go:172] (0xc000c69720) (3) Data frame handling I0727 11:03:18.192096 7 log.go:172] (0xc002d66dc0) Data frame received for 5 I0727 11:03:18.192115 7 log.go:172] (0xc000c69860) (5) Data frame handling I0727 11:03:18.194246 7 log.go:172] (0xc002d66dc0) Data frame received for 1 I0727 11:03:18.194261 7 log.go:172] (0xc001a70aa0) (1) Data frame handling I0727 11:03:18.194271 7 log.go:172] (0xc001a70aa0) (1) Data frame sent I0727 11:03:18.194279 7 log.go:172] (0xc002d66dc0) (0xc001a70aa0) Stream removed, broadcasting: 1 I0727 11:03:18.194331 7 log.go:172] (0xc002d66dc0) (0xc001a70aa0) Stream removed, broadcasting: 1 I0727 11:03:18.194344 7 log.go:172] (0xc002d66dc0) (0xc000c69720) Stream removed, broadcasting: 3 I0727 11:03:18.194491 7 log.go:172] (0xc002d66dc0) (0xc000c69860) Stream removed, broadcasting: 5 Jul 27 11:03:18.194: INFO: Found all expected endpoints: [netserver-0] I0727 11:03:18.194619 7 log.go:172] (0xc002d66dc0) Go away received Jul 27 11:03:18.197: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.9 8081 | grep -v '^\s*$'] Namespace:pod-network-test-607 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jul 27 11:03:18.197: INFO: >>> kubeConfig: /root/.kube/config I0727 11:03:18.231403 7 log.go:172] (0xc002d674a0) (0xc001a71040) Create stream I0727 11:03:18.231425 7 log.go:172] (0xc002d674a0) (0xc001a71040) Stream added, broadcasting: 1 I0727 11:03:18.234032 7 log.go:172] (0xc002d674a0) Reply frame received for 1 I0727 11:03:18.234073 7 log.go:172] (0xc002d674a0) (0xc0004d4fa0) Create stream I0727 11:03:18.234085 7 log.go:172] (0xc002d674a0) (0xc0004d4fa0) Stream added, broadcasting: 3 I0727 11:03:18.235144 7 log.go:172] (0xc002d674a0) Reply frame received for 3 I0727 11:03:18.235199 7 log.go:172] (0xc002d674a0) (0xc000c69900) 
Create stream I0727 11:03:18.235215 7 log.go:172] (0xc002d674a0) (0xc000c69900) Stream added, broadcasting: 5 I0727 11:03:18.236121 7 log.go:172] (0xc002d674a0) Reply frame received for 5 I0727 11:03:19.340427 7 log.go:172] (0xc002d674a0) Data frame received for 3 I0727 11:03:19.340471 7 log.go:172] (0xc0004d4fa0) (3) Data frame handling I0727 11:03:19.340499 7 log.go:172] (0xc0004d4fa0) (3) Data frame sent I0727 11:03:19.340517 7 log.go:172] (0xc002d674a0) Data frame received for 3 I0727 11:03:19.340551 7 log.go:172] (0xc0004d4fa0) (3) Data frame handling I0727 11:03:19.340718 7 log.go:172] (0xc002d674a0) Data frame received for 5 I0727 11:03:19.340960 7 log.go:172] (0xc000c69900) (5) Data frame handling I0727 11:03:19.342960 7 log.go:172] (0xc002d674a0) Data frame received for 1 I0727 11:03:19.342988 7 log.go:172] (0xc001a71040) (1) Data frame handling I0727 11:03:19.343002 7 log.go:172] (0xc001a71040) (1) Data frame sent I0727 11:03:19.343156 7 log.go:172] (0xc002d674a0) (0xc001a71040) Stream removed, broadcasting: 1 I0727 11:03:19.343243 7 log.go:172] (0xc002d674a0) (0xc001a71040) Stream removed, broadcasting: 1 I0727 11:03:19.343280 7 log.go:172] (0xc002d674a0) (0xc0004d4fa0) Stream removed, broadcasting: 3 I0727 11:03:19.343426 7 log.go:172] (0xc002d674a0) Go away received I0727 11:03:19.343581 7 log.go:172] (0xc002d674a0) (0xc000c69900) Stream removed, broadcasting: 5 Jul 27 11:03:19.343: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:03:19.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-607" for this suite. • [SLOW TEST:30.558 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1751,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:03:19.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:03:35.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4263" for this suite. • [SLOW TEST:16.355 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":90,"skipped":1768,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:03:35.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-b1aba944-1ebb-4869-89d9-a689ba931959 STEP: Creating a pod to test consume secrets Jul 27 11:03:35.774: INFO: Waiting up to 5m0s for pod "pod-secrets-2393a463-6464-4d5c-8b2b-9dc1f5965536" in namespace "secrets-2250" to be "Succeeded or Failed" Jul 27 11:03:35.817: INFO: Pod "pod-secrets-2393a463-6464-4d5c-8b2b-9dc1f5965536": Phase="Pending", Reason="", readiness=false. Elapsed: 42.222718ms Jul 27 11:03:37.821: INFO: Pod "pod-secrets-2393a463-6464-4d5c-8b2b-9dc1f5965536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046674612s Jul 27 11:03:39.840: INFO: Pod "pod-secrets-2393a463-6464-4d5c-8b2b-9dc1f5965536": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065269843s Jul 27 11:03:41.843: INFO: Pod "pod-secrets-2393a463-6464-4d5c-8b2b-9dc1f5965536": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.068643324s STEP: Saw pod success Jul 27 11:03:41.843: INFO: Pod "pod-secrets-2393a463-6464-4d5c-8b2b-9dc1f5965536" satisfied condition "Succeeded or Failed" Jul 27 11:03:41.845: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-2393a463-6464-4d5c-8b2b-9dc1f5965536 container secret-volume-test: STEP: delete the pod Jul 27 11:03:41.927: INFO: Waiting for pod pod-secrets-2393a463-6464-4d5c-8b2b-9dc1f5965536 to disappear Jul 27 11:03:41.940: INFO: Pod pod-secrets-2393a463-6464-4d5c-8b2b-9dc1f5965536 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:03:41.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2250" for this suite. • [SLOW TEST:6.265 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":91,"skipped":1779,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:03:41.974: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:03:55.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9603" for this suite. • [SLOW TEST:13.257 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":275,"completed":92,"skipped":1807,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:03:55.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-6298bcd7-301a-45eb-941f-e17f8575f152 STEP: Creating a pod to test consume configMaps Jul 27 11:03:55.322: INFO: Waiting up to 5m0s for pod "pod-configmaps-cc7033e2-3692-4aca-8ff9-1dd373fa4641" in namespace "configmap-9667" to be "Succeeded or Failed" Jul 27 11:03:55.374: INFO: Pod "pod-configmaps-cc7033e2-3692-4aca-8ff9-1dd373fa4641": Phase="Pending", Reason="", readiness=false. Elapsed: 52.553935ms Jul 27 11:03:57.377: INFO: Pod "pod-configmaps-cc7033e2-3692-4aca-8ff9-1dd373fa4641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05589133s Jul 27 11:03:59.493: INFO: Pod "pod-configmaps-cc7033e2-3692-4aca-8ff9-1dd373fa4641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.171380841s STEP: Saw pod success Jul 27 11:03:59.493: INFO: Pod "pod-configmaps-cc7033e2-3692-4aca-8ff9-1dd373fa4641" satisfied condition "Succeeded or Failed" Jul 27 11:03:59.549: INFO: Trying to get logs from node kali-worker pod pod-configmaps-cc7033e2-3692-4aca-8ff9-1dd373fa4641 container configmap-volume-test: STEP: delete the pod Jul 27 11:03:59.564: INFO: Waiting for pod pod-configmaps-cc7033e2-3692-4aca-8ff9-1dd373fa4641 to disappear Jul 27 11:03:59.569: INFO: Pod pod-configmaps-cc7033e2-3692-4aca-8ff9-1dd373fa4641 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:03:59.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9667" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":93,"skipped":1826,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:03:59.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:04:03.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1917" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1865,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:04:03.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's command Jul 27 11:04:03.807: INFO: Waiting up to 5m0s for pod "var-expansion-12f422e8-dc23-4ecd-9cd8-570826056289" in namespace "var-expansion-6454" to be "Succeeded or Failed" Jul 27 11:04:03.809: INFO: Pod "var-expansion-12f422e8-dc23-4ecd-9cd8-570826056289": Phase="Pending", Reason="", readiness=false. Elapsed: 2.85419ms Jul 27 11:04:05.814: INFO: Pod "var-expansion-12f422e8-dc23-4ecd-9cd8-570826056289": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00699262s Jul 27 11:04:07.817: INFO: Pod "var-expansion-12f422e8-dc23-4ecd-9cd8-570826056289": Phase="Running", Reason="", readiness=true. Elapsed: 4.010173625s Jul 27 11:04:09.821: INFO: Pod "var-expansion-12f422e8-dc23-4ecd-9cd8-570826056289": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014353583s STEP: Saw pod success Jul 27 11:04:09.821: INFO: Pod "var-expansion-12f422e8-dc23-4ecd-9cd8-570826056289" satisfied condition "Succeeded or Failed" Jul 27 11:04:09.824: INFO: Trying to get logs from node kali-worker pod var-expansion-12f422e8-dc23-4ecd-9cd8-570826056289 container dapi-container: STEP: delete the pod Jul 27 11:04:09.843: INFO: Waiting for pod var-expansion-12f422e8-dc23-4ecd-9cd8-570826056289 to disappear Jul 27 11:04:09.847: INFO: Pod var-expansion-12f422e8-dc23-4ecd-9cd8-570826056289 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:04:09.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6454" for this suite. • [SLOW TEST:6.161 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":95,"skipped":1886,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:04:09.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-5712079c-81e8-4666-8ba0-46c465df94e7 STEP: Creating a pod to test consume secrets Jul 27 11:04:10.055: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-484fe6b9-70ee-4450-880c-0dfd4aa103fc" in namespace "projected-7023" to be "Succeeded or Failed" Jul 27 11:04:10.099: INFO: Pod "pod-projected-secrets-484fe6b9-70ee-4450-880c-0dfd4aa103fc": Phase="Pending", Reason="", readiness=false. Elapsed: 44.729905ms Jul 27 11:04:12.104: INFO: Pod "pod-projected-secrets-484fe6b9-70ee-4450-880c-0dfd4aa103fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048996358s Jul 27 11:04:14.109: INFO: Pod "pod-projected-secrets-484fe6b9-70ee-4450-880c-0dfd4aa103fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054817126s Jul 27 11:04:16.114: INFO: Pod "pod-projected-secrets-484fe6b9-70ee-4450-880c-0dfd4aa103fc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.059241005s STEP: Saw pod success Jul 27 11:04:16.114: INFO: Pod "pod-projected-secrets-484fe6b9-70ee-4450-880c-0dfd4aa103fc" satisfied condition "Succeeded or Failed" Jul 27 11:04:16.117: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-484fe6b9-70ee-4450-880c-0dfd4aa103fc container projected-secret-volume-test: STEP: delete the pod Jul 27 11:04:16.160: INFO: Waiting for pod pod-projected-secrets-484fe6b9-70ee-4450-880c-0dfd4aa103fc to disappear Jul 27 11:04:16.217: INFO: Pod pod-projected-secrets-484fe6b9-70ee-4450-880c-0dfd4aa103fc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:04:16.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7023" for this suite. • [SLOW TEST:6.370 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":96,"skipped":1899,"failed":0} SS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:04:16.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 11:04:16.387: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:04:22.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1530" for this suite. 
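The log-retrieval test above exercises the pod's log subresource over a websocket connection to the API server. The sketch below is a rough counterpart, not the conformance test itself: it reads the same log stream through client-go's GetLogs over plain HTTPS. It assumes client-go v0.18 or newer (where Stream takes a context), the kubeconfig path used in this run, and placeholder namespace, pod, and container names.

```go
package main

import (
	"context"
	"io"
	"os"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// Counterpart of the "Trying to get logs from node ... pod ... container ..."
	// steps seen throughout this run: stream one container's log via the API server.
	req := client.CoreV1().Pods("default").GetLogs("example-pod", &corev1.PodLogOptions{
		Container: "main", // placeholder container name
	})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream) // print whatever the container wrote
}
```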
• [SLOW TEST:6.221 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":97,"skipped":1901,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:04:22.446: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Jul 27 11:04:22.507: INFO: Waiting up to 5m0s for pod "pod-471e1f26-55b2-41bd-974e-0f1a283bcf12" in namespace "emptydir-3426" to be "Succeeded or Failed" Jul 27 11:04:22.510: INFO: Pod "pod-471e1f26-55b2-41bd-974e-0f1a283bcf12": Phase="Pending", Reason="", readiness=false. Elapsed: 3.414225ms Jul 27 11:04:24.514: INFO: Pod "pod-471e1f26-55b2-41bd-974e-0f1a283bcf12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007506256s Jul 27 11:04:26.518: INFO: Pod "pod-471e1f26-55b2-41bd-974e-0f1a283bcf12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010867476s STEP: Saw pod success Jul 27 11:04:26.518: INFO: Pod "pod-471e1f26-55b2-41bd-974e-0f1a283bcf12" satisfied condition "Succeeded or Failed" Jul 27 11:04:26.520: INFO: Trying to get logs from node kali-worker pod pod-471e1f26-55b2-41bd-974e-0f1a283bcf12 container test-container: STEP: delete the pod Jul 27 11:04:26.705: INFO: Waiting for pod pod-471e1f26-55b2-41bd-974e-0f1a283bcf12 to disappear Jul 27 11:04:26.729: INFO: Pod pod-471e1f26-55b2-41bd-974e-0f1a283bcf12 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:04:26.729: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3426" for this suite. 
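As a reference for the EmptyDir "(root,0777,default)" case, a minimal sketch (assuming client-go v0.18+, a reachable cluster, and the busybox image; all names are placeholders) of a pod that mounts an emptyDir on the default medium, creates a file with mode 0777, and prints the permissions for inspection, which is roughly what the "emptydir 0777 on node default medium" step verifies:

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Default medium = node-local storage; corev1.StorageMediumMemory would use tmpfs.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				// Create a file with mode 0777 and list it so the permissions appear in the container log.
				Command: []string{"sh", "-c", "touch /mnt/test/file && chmod 0777 /mnt/test/file && ls -l /mnt/test"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```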
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1905,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:04:26.771: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 11:04:26.902: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jul 27 11:04:32.033: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jul 27 11:04:32.033: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jul 27 11:04:34.038: INFO: Creating deployment "test-rollover-deployment" Jul 27 11:04:34.116: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jul 27 11:04:36.123: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jul 27 11:04:36.126: INFO: Ensure that both replica sets have 1 created replica Jul 27 11:04:36.130: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jul 27 11:04:36.135: INFO: Updating deployment test-rollover-deployment Jul 27 11:04:36.135: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jul 27 11:04:38.167: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jul 27 11:04:38.174: INFO: Make sure deployment "test-rollover-deployment" is complete Jul 27 11:04:38.179: INFO: all replica sets need to contain the pod-template-hash label Jul 27 11:04:38.179: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444677, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 11:04:40.215: INFO: all replica sets need to contain the pod-template-hash label Jul 27 11:04:40.215: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444677, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 11:04:42.190: INFO: all replica sets need to contain the pod-template-hash label Jul 27 11:04:42.190: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444677, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 11:04:44.187: INFO: all replica sets need to contain the pod-template-hash label Jul 27 11:04:44.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 11:04:46.189: INFO: all replica sets need to contain the pod-template-hash label Jul 27 11:04:46.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 11:04:48.188: INFO: all replica sets need to contain the pod-template-hash label Jul 27 11:04:48.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 11:04:50.187: INFO: all replica sets need to contain the pod-template-hash label Jul 27 11:04:50.187: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 11:04:52.189: INFO: all replica sets need to contain the pod-template-hash label Jul 27 11:04:52.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444682, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444674, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)} Jul 27 
11:04:54.188: INFO: Jul 27 11:04:54.188: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Jul 27 11:04:54.197: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6546 /apis/apps/v1/namespaces/deployment-6546/deployments/test-rollover-deployment d43a97b7-91fd-4f59-be4e-b2953cc458d1 4554450 2 2020-07-27 11:04:34 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-07-27 11:04:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-27 11:04:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 
125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023db4f8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-27 11:04:34 +0000 UTC,LastTransitionTime:2020-07-27 11:04:34 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-07-27 11:04:52 +0000 UTC,LastTransitionTime:2020-07-27 11:04:34 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} Jul 27 11:04:54.200: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b 
deployment-6546 /apis/apps/v1/namespaces/deployment-6546/replicasets/test-rollover-deployment-84f7f6f64b b7027eb2-51eb-41e0-b4c2-7a32747a5433 4554439 2 2020-07-27 11:04:36 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment d43a97b7-91fd-4f59-be4e-b2953cc458d1 0xc0023dbd77 0xc0023dbd78}] [] [{kube-controller-manager Update apps/v1 2020-07-27 11:04:52 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 51 97 57 55 98 55 45 57 49 102 100 45 52 102 53 57 45 98 101 52 101 45 98 50 57 53 51 99 99 52 53 56 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 
103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023dbe08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Jul 27 11:04:54.200: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jul 27 11:04:54.200: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6546 /apis/apps/v1/namespaces/deployment-6546/replicasets/test-rollover-controller 92ccab77-39de-499c-9663-354d953b8e0e 4554448 2 2020-07-27 11:04:26 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment d43a97b7-91fd-4f59-be4e-b2953cc458d1 0xc0023dbb5f 0xc0023dbb70}] [] [{e2e.test Update apps/v1 2020-07-27 11:04:26 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 
116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-27 11:04:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 51 97 57 55 98 55 45 57 49 102 100 45 52 102 53 57 45 98 101 52 101 45 98 50 57 53 51 99 99 52 53 56 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0023dbc08 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 27 11:04:54.201: INFO: 
&ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5 deployment-6546 /apis/apps/v1/namespaces/deployment-6546/replicasets/test-rollover-deployment-5686c4cfd5 e403aee9-e663-4070-ae50-42cae890bbde 4554381 2 2020-07-27 11:04:34 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment d43a97b7-91fd-4f59-be4e-b2953cc458d1 0xc0023dbc77 0xc0023dbc78}] [] [{kube-controller-manager Update apps/v1 2020-07-27 11:04:37 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 100 52 51 97 57 55 98 55 45 57 49 102 100 45 52 102 53 57 45 98 101 52 101 45 98 50 57 53 51 99 99 52 53 56 100 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 
44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0023dbd08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Jul 27 11:04:54.204: INFO: Pod "test-rollover-deployment-84f7f6f64b-tqtsh" is available: &Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-tqtsh test-rollover-deployment-84f7f6f64b- deployment-6546 /api/v1/namespaces/deployment-6546/pods/test-rollover-deployment-84f7f6f64b-tqtsh 79d23903-8dee-416a-a02e-f4e1fbb2afad 4554405 0 2020-07-27 11:04:36 +0000 UTC map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b b7027eb2-51eb-41e0-b4c2-7a32747a5433 0xc002c6b9e7 0xc002c6b9e8}] [] [{kube-controller-manager Update v1 2020-07-27 11:04:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 55 48 50 55 101 98 50 45 53 49 101 98 45 52 49 101 48 45 98 52 99 50 45 55 97 51 50 55 52 55 97 53 52 51 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 
44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:04:42 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-4kbnk,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-4kbnk,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-4kbnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:04:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:04:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:04:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:04:37 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.14,StartTime:2020-07-27 11:04:37 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:04:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://ff537b73210528b3999eea6ef398be105954c44c67e088085128357b5c9dea05,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:04:54.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6546" for this suite. • [SLOW TEST:27.441 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":99,"skipped":1936,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:04:54.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:04:54.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9203" for this suite. 
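Editor's note: the [sig-network] Services spec traced above simply lists Services across every namespace and checks the one it created is present. A minimal sketch of that call with client-go follows; it is not the suite's code, and the kubeconfig path is taken from the log while error handling is kept deliberately simple.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// metav1.NamespaceAll ("") lists objects cluster-wide, given sufficient RBAC.
	svcs, err := cs.CoreV1().Services(metav1.NamespaceAll).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, s := range svcs.Items {
		fmt.Printf("%s/%s\n", s.Namespace, s.Name)
	}
}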
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":100,"skipped":1947,"failed":0} ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:04:54.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Jul 27 11:04:55.105: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Jul 27 11:04:57.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444695, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444695, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444695, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444695, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 27 11:05:00.189: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Jul 27 11:05:00.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:05:01.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-1737" for this suite. 
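Editor's note: the conversion-webhook spec above spends most of its time on the "Wait for the deployment to be ready" step, repeatedly reading the DeploymentStatus shown in the log. A rough sketch of that polling loop with client-go is below; the namespace, deployment name, and timeout are illustrative, and the real framework uses its own wait helpers rather than this loop.

package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDeployment polls until the named Deployment reports all replicas available,
// mirroring the "Wait for the deployment to be ready" step logged above.
func waitForDeployment(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		d, err := cs.AppsV1().Deployments(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if d.Spec.Replicas != nil && d.Status.AvailableReplicas == *d.Spec.Replicas {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("deployment %s/%s not available after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Names here are illustrative; the suite generates its own namespace.
	if err := waitForDeployment(cs, "crd-webhook-1737", "sample-crd-conversion-webhook-deployment", 3*time.Minute); err != nil {
		panic(err)
	}
}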
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:7.371 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":101,"skipped":1947,"failed":0} [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:05:01.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Jul 27 11:05:01.724: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:05:07.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9209" for this suite. 
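Editor's note: the InitContainer spec above builds a pod whose first init container always fails under RestartPolicy=Never, so the app container must never start. A minimal sketch of that pod as a k8s.io/api struct (images taken from the later init-container dump in this log; the pod name is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod whose first init container always fails; with RestartPolicy=Never the
	// kubelet marks the pod Failed and never starts the app container.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-fail", Namespace: "default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.2"},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Init containers run in order, so init2 never runs either once init1 exits non-zero.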
• [SLOW TEST:5.931 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":102,"skipped":1947,"failed":0} S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:05:07.577: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 27 11:05:16.048: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 27 11:05:16.058: INFO: Pod pod-with-prestop-http-hook still exists Jul 27 11:05:18.058: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 27 11:05:18.093: INFO: Pod pod-with-prestop-http-hook still exists Jul 27 11:05:20.058: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 27 11:05:20.062: INFO: Pod pod-with-prestop-http-hook still exists Jul 27 11:05:22.058: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 27 11:05:22.062: INFO: Pod pod-with-prestop-http-hook still exists Jul 27 11:05:24.058: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Jul 27 11:05:24.062: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:05:24.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8040" for this suite. 
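Editor's note: the lifecycle-hook spec above attaches a preStop httpGet hook and then checks the handler pod received the request while the pod was being deleted. A minimal sketch of such a pod, assuming the v1.18-era k8s.io/api types (corev1.Handler was later renamed LifecycleHandler); the host IP, port, path, and image are placeholders, not the suite's values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	// On deletion the kubelet issues the preStop HTTP GET before sending SIGTERM,
	// which is why the log shows the pod lingering for a few poll intervals.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.2",
				Lifecycle: &corev1.Lifecycle{
					// corev1.Handler was renamed LifecycleHandler in later API versions.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: "10.244.1.10", // IP of the hook-handler pod; illustrative
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}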
• [SLOW TEST:16.501 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":103,"skipped":1948,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:05:24.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Jul 27 11:05:24.175: INFO: Waiting up to 5m0s for pod "downwardapi-volume-78449a8c-9ef2-4c56-be5c-4310dab70489" in namespace "downward-api-8109" to be "Succeeded or Failed" Jul 27 11:05:24.184: INFO: Pod "downwardapi-volume-78449a8c-9ef2-4c56-be5c-4310dab70489": Phase="Pending", Reason="", readiness=false. Elapsed: 9.235677ms Jul 27 11:05:26.188: INFO: Pod "downwardapi-volume-78449a8c-9ef2-4c56-be5c-4310dab70489": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013637164s Jul 27 11:05:28.193: INFO: Pod "downwardapi-volume-78449a8c-9ef2-4c56-be5c-4310dab70489": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01826885s STEP: Saw pod success Jul 27 11:05:28.193: INFO: Pod "downwardapi-volume-78449a8c-9ef2-4c56-be5c-4310dab70489" satisfied condition "Succeeded or Failed" Jul 27 11:05:28.196: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-78449a8c-9ef2-4c56-be5c-4310dab70489 container client-container: STEP: delete the pod Jul 27 11:05:28.255: INFO: Waiting for pod downwardapi-volume-78449a8c-9ef2-4c56-be5c-4310dab70489 to disappear Jul 27 11:05:28.262: INFO: Pod downwardapi-volume-78449a8c-9ef2-4c56-be5c-4310dab70489 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:05:28.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8109" for this suite. 
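Editor's note: the Downward API volume spec above projects the container's own memory request into a file and reads it back from the container logs. A minimal sketch of that wiring, with an illustrative pod name, image, and request size:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// The container's memory request is projected into a file via a downwardAPI volume.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}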
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":104,"skipped":1969,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:05:28.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-map-b9d5dda1-74b0-4c69-9101-bac7e0241346 STEP: Creating a pod to test consume secrets Jul 27 11:05:28.338: INFO: Waiting up to 5m0s for pod "pod-secrets-14ae4d47-bba8-4197-84ec-e7c16564a0df" in namespace "secrets-7585" to be "Succeeded or Failed" Jul 27 11:05:28.394: INFO: Pod "pod-secrets-14ae4d47-bba8-4197-84ec-e7c16564a0df": Phase="Pending", Reason="", readiness=false. Elapsed: 56.283764ms Jul 27 11:05:30.428: INFO: Pod "pod-secrets-14ae4d47-bba8-4197-84ec-e7c16564a0df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.090035238s Jul 27 11:05:32.432: INFO: Pod "pod-secrets-14ae4d47-bba8-4197-84ec-e7c16564a0df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.093869733s STEP: Saw pod success Jul 27 11:05:32.432: INFO: Pod "pod-secrets-14ae4d47-bba8-4197-84ec-e7c16564a0df" satisfied condition "Succeeded or Failed" Jul 27 11:05:32.435: INFO: Trying to get logs from node kali-worker pod pod-secrets-14ae4d47-bba8-4197-84ec-e7c16564a0df container secret-volume-test: STEP: delete the pod Jul 27 11:05:32.463: INFO: Waiting for pod pod-secrets-14ae4d47-bba8-4197-84ec-e7c16564a0df to disappear Jul 27 11:05:32.481: INFO: Pod pod-secrets-14ae4d47-bba8-4197-84ec-e7c16564a0df no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:05:32.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7585" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":105,"skipped":1972,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:05:32.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jul 27 11:05:41.097: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 27 11:05:41.162: INFO: Pod pod-with-poststart-http-hook still exists Jul 27 11:05:43.162: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 27 11:05:43.167: INFO: Pod pod-with-poststart-http-hook still exists Jul 27 11:05:45.162: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jul 27 11:05:45.166: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:05:45.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-163" for this suite. 
• [SLOW TEST:12.534 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1992,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:05:45.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6543.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6543.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6543.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6543.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6543.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6543.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jul 27 11:05:54.025: INFO: DNS probes using dns-6543/dns-test-ff620850-4ca6-4705-a832-55b2beca740c succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:05:54.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6543" for this suite. 
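Editor's note: the DNS spec above probes hostname records with the getent/dig loops shown in the log. The record it expects comes from pairing a headless Service with a pod whose hostname and subdomain match it. A minimal sketch of that pairing; the image, labels, and names are illustrative, not the suite's:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A headless Service plus a pod with matching hostname/subdomain yields an A record
	// of the form dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless
			Selector:  map[string]string{"dns-test": "true"},
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "dns-querier-2",
			Labels: map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2",
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "sleep 600"},
			}},
		},
	}
	for _, obj := range []interface{}{svc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}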
• [SLOW TEST:9.108 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":107,"skipped":2002,"failed":0} SSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:05:54.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Jul 27 11:05:54.903: INFO: Waiting up to 5m0s for pod "var-expansion-ccaa67c3-1f9e-4e99-8f5d-2db3b098cf44" in namespace "var-expansion-1849" to be "Succeeded or Failed" Jul 27 11:05:54.935: INFO: Pod "var-expansion-ccaa67c3-1f9e-4e99-8f5d-2db3b098cf44": Phase="Pending", Reason="", readiness=false. Elapsed: 32.562029ms Jul 27 11:05:57.364: INFO: Pod "var-expansion-ccaa67c3-1f9e-4e99-8f5d-2db3b098cf44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.461313516s Jul 27 11:05:59.573: INFO: Pod "var-expansion-ccaa67c3-1f9e-4e99-8f5d-2db3b098cf44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.670256799s Jul 27 11:06:01.577: INFO: Pod "var-expansion-ccaa67c3-1f9e-4e99-8f5d-2db3b098cf44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.674856319s STEP: Saw pod success Jul 27 11:06:01.578: INFO: Pod "var-expansion-ccaa67c3-1f9e-4e99-8f5d-2db3b098cf44" satisfied condition "Succeeded or Failed" Jul 27 11:06:01.581: INFO: Trying to get logs from node kali-worker2 pod var-expansion-ccaa67c3-1f9e-4e99-8f5d-2db3b098cf44 container dapi-container: STEP: delete the pod Jul 27 11:06:01.619: INFO: Waiting for pod var-expansion-ccaa67c3-1f9e-4e99-8f5d-2db3b098cf44 to disappear Jul 27 11:06:01.643: INFO: Pod var-expansion-ccaa67c3-1f9e-4e99-8f5d-2db3b098cf44 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:06:01.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1849" for this suite. 
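Editor's note: the Variable Expansion spec above checks that $(VAR) references to earlier env entries are expanded by the kubelet. A minimal sketch of a pod using that composition; names, values, and image are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// FOOBAR is composed from the two env vars defined before it in the same list.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					{Name: "BAR", Value: "bar-value"},
					{Name: "FOOBAR", Value: "$(FOO);;$(BAR)"}, // expanded at container start
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}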
• [SLOW TEST:7.378 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":2008,"failed":0} SSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:06:01.661: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:06:01.810: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9204" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":109,"skipped":2011,"failed":0} SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:06:01.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
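Editor's note: the ResourceQuota spec traced above (before the lifecycle-hook setup continues below) creates, updates, and deletes a quota. A rough sketch of that sequence with client-go, assuming a recent client-go (the option argument types shifted slightly around the 1.18/1.19 releases); the namespace, quota name, and limits are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ns := "default" // illustrative; the suite uses a generated namespace
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("5"),
				corev1.ResourceCPU:  resource.MustParse("1"),
			},
		},
	}
	created, err := cs.CoreV1().ResourceQuotas(ns).Create(context.TODO(), quota, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// Update: tighten the pod count, then delete the quota again.
	created.Spec.Hard[corev1.ResourcePods] = resource.MustParse("3")
	if _, err := cs.CoreV1().ResourceQuotas(ns).Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	if err := cs.CoreV1().ResourceQuotas(ns).Delete(context.TODO(), quota.Name, metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}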
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jul 27 11:06:09.998: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 27 11:06:10.060: INFO: Pod pod-with-prestop-exec-hook still exists Jul 27 11:06:12.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 27 11:06:12.064: INFO: Pod pod-with-prestop-exec-hook still exists Jul 27 11:06:14.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 27 11:06:14.065: INFO: Pod pod-with-prestop-exec-hook still exists Jul 27 11:06:16.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 27 11:06:16.065: INFO: Pod pod-with-prestop-exec-hook still exists Jul 27 11:06:18.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 27 11:06:18.064: INFO: Pod pod-with-prestop-exec-hook still exists Jul 27 11:06:20.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 27 11:06:20.393: INFO: Pod pod-with-prestop-exec-hook still exists Jul 27 11:06:22.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 27 11:06:22.065: INFO: Pod pod-with-prestop-exec-hook still exists Jul 27 11:06:24.060: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jul 27 11:06:24.065: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:06:24.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2124" for this suite. 
• [SLOW TEST:22.266 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":110,"skipped":2013,"failed":0} S ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:06:24.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jul 27 11:06:25.262: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jul 27 11:06:27.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444785, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444785, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444785, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444785, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jul 27 11:06:30.418: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy 
validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:06:30.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4194" for this suite. STEP: Destroying namespace "webhook-4194-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.985 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":111,"skipped":2014,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:06:31.069: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Jul 27 11:06:31.206: INFO: Waiting up to 5m0s for pod "downward-api-e7822af3-7694-4072-a2c4-ed10f9e88bcf" in namespace "downward-api-4588" to be "Succeeded or Failed" Jul 27 11:06:31.217: INFO: Pod "downward-api-e7822af3-7694-4072-a2c4-ed10f9e88bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 11.458118ms Jul 27 11:06:33.297: INFO: Pod "downward-api-e7822af3-7694-4072-a2c4-ed10f9e88bcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091120513s Jul 27 11:06:35.301: INFO: Pod "downward-api-e7822af3-7694-4072-a2c4-ed10f9e88bcf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.095056171s STEP: Saw pod success Jul 27 11:06:35.301: INFO: Pod "downward-api-e7822af3-7694-4072-a2c4-ed10f9e88bcf" satisfied condition "Succeeded or Failed" Jul 27 11:06:35.304: INFO: Trying to get logs from node kali-worker2 pod downward-api-e7822af3-7694-4072-a2c4-ed10f9e88bcf container dapi-container: STEP: delete the pod Jul 27 11:06:35.357: INFO: Waiting for pod downward-api-e7822af3-7694-4072-a2c4-ed10f9e88bcf to disappear Jul 27 11:06:35.386: INFO: Pod downward-api-e7822af3-7694-4072-a2c4-ed10f9e88bcf no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:06:35.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4588" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":2036,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:06:35.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-6122be73-2c18-4d55-815a-bccb8a5e75cb STEP: Creating a pod to test consume secrets Jul 27 11:06:35.478: INFO: Waiting up to 5m0s for pod "pod-secrets-9962c611-205e-4995-962e-7392136c584c" in namespace "secrets-2508" to be "Succeeded or Failed" Jul 27 11:06:35.518: INFO: Pod "pod-secrets-9962c611-205e-4995-962e-7392136c584c": Phase="Pending", Reason="", readiness=false. Elapsed: 39.798267ms Jul 27 11:06:37.532: INFO: Pod "pod-secrets-9962c611-205e-4995-962e-7392136c584c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053588347s Jul 27 11:06:39.536: INFO: Pod "pod-secrets-9962c611-205e-4995-962e-7392136c584c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05764545s STEP: Saw pod success Jul 27 11:06:39.536: INFO: Pod "pod-secrets-9962c611-205e-4995-962e-7392136c584c" satisfied condition "Succeeded or Failed" Jul 27 11:06:39.539: INFO: Trying to get logs from node kali-worker pod pod-secrets-9962c611-205e-4995-962e-7392136c584c container secret-env-test: STEP: delete the pod Jul 27 11:06:39.578: INFO: Waiting for pod pod-secrets-9962c611-205e-4995-962e-7392136c584c to disappear Jul 27 11:06:39.590: INFO: Pod pod-secrets-9962c611-205e-4995-962e-7392136c584c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Jul 27 11:06:39.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2508" for this suite. 
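Editor's note: the Secrets env-var spec above exposes a single secret key to the container's environment rather than mounting it as a volume. A minimal sketch of that pod; the secret name, key, env var name, and image are illustrative:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One secret key is injected as the SECRET_DATA environment variable.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-env-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env | grep SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA",
					ValueFrom: &corev1.EnvVarSource{
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
							Key:                  "data-1",
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}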
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":2082,"failed":0} SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Jul 27 11:06:39.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Jul 27 11:06:39.681: INFO: PodSpec: initContainers in spec.initContainers Jul 27 11:07:29.893: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-461815ba-3e5e-4fab-9bba-1f2489d13689", GenerateName:"", Namespace:"init-container-8853", SelfLink:"/api/v1/namespaces/init-container-8853/pods/pod-init-461815ba-3e5e-4fab-9bba-1f2489d13689", UID:"ee803a41-e938-457a-817d-01f35bc7d850", ResourceVersion:"4555474", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63731444799, loc:(*time.Location)(0x7b220e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"681897632"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00298c320), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00298c340)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00298c360), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00298c380)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-6fmgj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0025d3d40), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6fmgj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6fmgj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-6fmgj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00246c908), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000ba65b0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00246cae0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00246cb00)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00246cb08), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00246cb0c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444799, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444799, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444799, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731444799, loc:(*time.Location)(0x7b220e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.15", PodIP:"10.244.1.23", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.23"}}, StartTime:(*v1.Time)(0xc00298c3a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00298c3e0), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ba6700)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://022e84e763defa55d21e3bcbb9c4147ccd0edb746cf62984ccbcdf8df8f46832", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00298c4e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00298c3c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00246cbdf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:07:29.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8853" for this suite.

• [SLOW TEST:50.377 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":114,"skipped":2084,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:07:29.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:07:30.222: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76bde0af-e2ac-447c-9d91-8840b1e56b15" in namespace "projected-394" to be "Succeeded or Failed"
Jul 27 11:07:30.230: INFO: Pod "downwardapi-volume-76bde0af-e2ac-447c-9d91-8840b1e56b15": Phase="Pending", Reason="", readiness=false. Elapsed: 8.641252ms
Jul 27 11:07:32.315: INFO: Pod "downwardapi-volume-76bde0af-e2ac-447c-9d91-8840b1e56b15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093343915s
Jul 27 11:07:34.319: INFO: Pod "downwardapi-volume-76bde0af-e2ac-447c-9d91-8840b1e56b15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097191231s
STEP: Saw pod success
Jul 27 11:07:34.319: INFO: Pod "downwardapi-volume-76bde0af-e2ac-447c-9d91-8840b1e56b15" satisfied condition "Succeeded or Failed"
Jul 27 11:07:34.322: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-76bde0af-e2ac-447c-9d91-8840b1e56b15 container client-container: 
STEP: delete the pod
Jul 27 11:07:34.382: INFO: Waiting for pod downwardapi-volume-76bde0af-e2ac-447c-9d91-8840b1e56b15 to disappear
Jul 27 11:07:34.388: INFO: Pod downwardapi-volume-76bde0af-e2ac-447c-9d91-8840b1e56b15 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:07:34.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-394" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":115,"skipped":2121,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:07:34.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:07:34.528: INFO: Waiting up to 5m0s for pod "downwardapi-volume-22d97b99-1108-4ecb-8915-2f74d8de9fec" in namespace "downward-api-8992" to be "Succeeded or Failed"
Jul 27 11:07:34.590: INFO: Pod "downwardapi-volume-22d97b99-1108-4ecb-8915-2f74d8de9fec": Phase="Pending", Reason="", readiness=false. Elapsed: 62.686242ms
Jul 27 11:07:36.604: INFO: Pod "downwardapi-volume-22d97b99-1108-4ecb-8915-2f74d8de9fec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076052282s
Jul 27 11:07:38.610: INFO: Pod "downwardapi-volume-22d97b99-1108-4ecb-8915-2f74d8de9fec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082120844s
STEP: Saw pod success
Jul 27 11:07:38.610: INFO: Pod "downwardapi-volume-22d97b99-1108-4ecb-8915-2f74d8de9fec" satisfied condition "Succeeded or Failed"
Jul 27 11:07:38.612: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-22d97b99-1108-4ecb-8915-2f74d8de9fec container client-container: 
STEP: delete the pod
Jul 27 11:07:38.644: INFO: Waiting for pod downwardapi-volume-22d97b99-1108-4ecb-8915-2f74d8de9fec to disappear
Jul 27 11:07:38.657: INFO: Pod downwardapi-volume-22d97b99-1108-4ecb-8915-2f74d8de9fec no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:07:38.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8992" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":2169,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:07:38.684: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:07:39.001: INFO: (0) /api/v1/nodes/kali-worker/proxy/logs/:
alternatives.log
containers/
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2699.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2699.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 27 11:07:45.235: INFO: DNS probes using dns-2699/dns-test-ef98ca59-ca87-4861-a9e7-5af73e13a472 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:07:45.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2699" for this suite.

• [SLOW TEST:6.198 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":275,"completed":118,"skipped":2257,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:07:45.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:07:50.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4541" for this suite.

• [SLOW TEST:5.644 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":119,"skipped":2265,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:07:50.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service multi-endpoint-test in namespace services-4884
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4884 to expose endpoints map[]
Jul 27 11:07:51.197: INFO: successfully validated that service multi-endpoint-test in namespace services-4884 exposes endpoints map[] (44.060578ms elapsed)
STEP: Creating pod pod1 in namespace services-4884
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4884 to expose endpoints map[pod1:[100]]
Jul 27 11:07:55.492: INFO: successfully validated that service multi-endpoint-test in namespace services-4884 exposes endpoints map[pod1:[100]] (4.22468338s elapsed)
STEP: Creating pod pod2 in namespace services-4884
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4884 to expose endpoints map[pod1:[100] pod2:[101]]
Jul 27 11:07:59.649: INFO: successfully validated that service multi-endpoint-test in namespace services-4884 exposes endpoints map[pod1:[100] pod2:[101]] (4.151738485s elapsed)
STEP: Deleting pod pod1 in namespace services-4884
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4884 to expose endpoints map[pod2:[101]]
Jul 27 11:08:00.684: INFO: successfully validated that service multi-endpoint-test in namespace services-4884 exposes endpoints map[pod2:[101]] (1.030327344s elapsed)
STEP: Deleting pod pod2 in namespace services-4884
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4884 to expose endpoints map[]
Jul 27 11:08:01.698: INFO: successfully validated that service multi-endpoint-test in namespace services-4884 exposes endpoints map[] (1.00900555s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:08:01.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4884" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.032 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":275,"completed":120,"skipped":2283,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:08:01.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-6836
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6836 to expose endpoints map[]
Jul 27 11:08:02.428: INFO: Get endpoints failed (8.300007ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jul 27 11:08:03.434: INFO: successfully validated that service endpoint-test2 in namespace services-6836 exposes endpoints map[] (1.014655967s elapsed)
STEP: Creating pod pod1 in namespace services-6836
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6836 to expose endpoints map[pod1:[80]]
Jul 27 11:08:07.540: INFO: successfully validated that service endpoint-test2 in namespace services-6836 exposes endpoints map[pod1:[80]] (4.076180345s elapsed)
STEP: Creating pod pod2 in namespace services-6836
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6836 to expose endpoints map[pod1:[80] pod2:[80]]
Jul 27 11:08:11.646: INFO: successfully validated that service endpoint-test2 in namespace services-6836 exposes endpoints map[pod1:[80] pod2:[80]] (4.100490323s elapsed)
STEP: Deleting pod pod1 in namespace services-6836
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6836 to expose endpoints map[pod2:[80]]
Jul 27 11:08:12.734: INFO: successfully validated that service endpoint-test2 in namespace services-6836 exposes endpoints map[pod2:[80]] (1.08297209s elapsed)
STEP: Deleting pod pod2 in namespace services-6836
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6836 to expose endpoints map[]
Jul 27 11:08:13.952: INFO: successfully validated that service endpoint-test2 in namespace services-6836 exposes endpoints map[] (1.213611585s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:08:13.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6836" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.029 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":121,"skipped":2289,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:08:13.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-58ca23f7-486e-427e-ae4c-82f60f8206ae
STEP: Creating a pod to test consume secrets
Jul 27 11:08:14.047: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dc610df6-03c4-4122-ae10-e80357507500" in namespace "projected-7496" to be "Succeeded or Failed"
Jul 27 11:08:14.095: INFO: Pod "pod-projected-secrets-dc610df6-03c4-4122-ae10-e80357507500": Phase="Pending", Reason="", readiness=false. Elapsed: 47.311002ms
Jul 27 11:08:16.099: INFO: Pod "pod-projected-secrets-dc610df6-03c4-4122-ae10-e80357507500": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051500774s
Jul 27 11:08:18.103: INFO: Pod "pod-projected-secrets-dc610df6-03c4-4122-ae10-e80357507500": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.055285594s
STEP: Saw pod success
Jul 27 11:08:18.103: INFO: Pod "pod-projected-secrets-dc610df6-03c4-4122-ae10-e80357507500" satisfied condition "Succeeded or Failed"
Jul 27 11:08:18.106: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-dc610df6-03c4-4122-ae10-e80357507500 container projected-secret-volume-test: 
STEP: delete the pod
Jul 27 11:08:18.215: INFO: Waiting for pod pod-projected-secrets-dc610df6-03c4-4122-ae10-e80357507500 to disappear
Jul 27 11:08:18.227: INFO: Pod pod-projected-secrets-dc610df6-03c4-4122-ae10-e80357507500 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:08:18.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7496" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":122,"skipped":2293,"failed":0}
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:08:18.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-6444
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 27 11:08:18.274: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 27 11:08:18.423: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:08:20.522: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:08:22.447: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:08:24.427: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:08:26.427: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:08:28.426: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:08:30.429: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:08:32.428: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:08:34.427: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:08:36.427: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 27 11:08:36.432: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 27 11:08:40.459: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.29:8080/dial?request=hostname&protocol=udp&host=10.244.2.173&port=8081&tries=1'] Namespace:pod-network-test-6444 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 27 11:08:40.459: INFO: >>> kubeConfig: /root/.kube/config
I0727 11:08:40.495882       7 log.go:172] (0xc005e7e370) (0xc0011908c0) Create stream
I0727 11:08:40.495911       7 log.go:172] (0xc005e7e370) (0xc0011908c0) Stream added, broadcasting: 1
I0727 11:08:40.498318       7 log.go:172] (0xc005e7e370) Reply frame received for 1
I0727 11:08:40.498360       7 log.go:172] (0xc005e7e370) (0xc00096edc0) Create stream
I0727 11:08:40.498376       7 log.go:172] (0xc005e7e370) (0xc00096edc0) Stream added, broadcasting: 3
I0727 11:08:40.499224       7 log.go:172] (0xc005e7e370) Reply frame received for 3
I0727 11:08:40.499283       7 log.go:172] (0xc005e7e370) (0xc0024801e0) Create stream
I0727 11:08:40.499302       7 log.go:172] (0xc005e7e370) (0xc0024801e0) Stream added, broadcasting: 5
I0727 11:08:40.500326       7 log.go:172] (0xc005e7e370) Reply frame received for 5
I0727 11:08:40.588350       7 log.go:172] (0xc005e7e370) Data frame received for 3
I0727 11:08:40.588370       7 log.go:172] (0xc00096edc0) (3) Data frame handling
I0727 11:08:40.588380       7 log.go:172] (0xc00096edc0) (3) Data frame sent
I0727 11:08:40.589068       7 log.go:172] (0xc005e7e370) Data frame received for 5
I0727 11:08:40.589096       7 log.go:172] (0xc0024801e0) (5) Data frame handling
I0727 11:08:40.589121       7 log.go:172] (0xc005e7e370) Data frame received for 3
I0727 11:08:40.589133       7 log.go:172] (0xc00096edc0) (3) Data frame handling
I0727 11:08:40.590639       7 log.go:172] (0xc005e7e370) Data frame received for 1
I0727 11:08:40.590660       7 log.go:172] (0xc0011908c0) (1) Data frame handling
I0727 11:08:40.590685       7 log.go:172] (0xc0011908c0) (1) Data frame sent
I0727 11:08:40.590700       7 log.go:172] (0xc005e7e370) (0xc0011908c0) Stream removed, broadcasting: 1
I0727 11:08:40.590748       7 log.go:172] (0xc005e7e370) Go away received
I0727 11:08:40.590798       7 log.go:172] (0xc005e7e370) (0xc0011908c0) Stream removed, broadcasting: 1
I0727 11:08:40.590813       7 log.go:172] (0xc005e7e370) (0xc00096edc0) Stream removed, broadcasting: 3
I0727 11:08:40.590827       7 log.go:172] (0xc005e7e370) (0xc0024801e0) Stream removed, broadcasting: 5
Jul 27 11:08:40.590: INFO: Waiting for responses: map[]
Jul 27 11:08:40.593: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.29:8080/dial?request=hostname&protocol=udp&host=10.244.1.28&port=8081&tries=1'] Namespace:pod-network-test-6444 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 27 11:08:40.593: INFO: >>> kubeConfig: /root/.kube/config
I0727 11:08:40.625694       7 log.go:172] (0xc002db2630) (0xc002480fa0) Create stream
I0727 11:08:40.625722       7 log.go:172] (0xc002db2630) (0xc002480fa0) Stream added, broadcasting: 1
I0727 11:08:40.629524       7 log.go:172] (0xc002db2630) Reply frame received for 1
I0727 11:08:40.629587       7 log.go:172] (0xc002db2630) (0xc002481040) Create stream
I0727 11:08:40.629620       7 log.go:172] (0xc002db2630) (0xc002481040) Stream added, broadcasting: 3
I0727 11:08:40.632248       7 log.go:172] (0xc002db2630) Reply frame received for 3
I0727 11:08:40.632293       7 log.go:172] (0xc002db2630) (0xc000e94780) Create stream
I0727 11:08:40.632307       7 log.go:172] (0xc002db2630) (0xc000e94780) Stream added, broadcasting: 5
I0727 11:08:40.633248       7 log.go:172] (0xc002db2630) Reply frame received for 5
I0727 11:08:40.707480       7 log.go:172] (0xc002db2630) Data frame received for 3
I0727 11:08:40.707504       7 log.go:172] (0xc002481040) (3) Data frame handling
I0727 11:08:40.707518       7 log.go:172] (0xc002481040) (3) Data frame sent
I0727 11:08:40.708135       7 log.go:172] (0xc002db2630) Data frame received for 5
I0727 11:08:40.708162       7 log.go:172] (0xc000e94780) (5) Data frame handling
I0727 11:08:40.708181       7 log.go:172] (0xc002db2630) Data frame received for 3
I0727 11:08:40.708193       7 log.go:172] (0xc002481040) (3) Data frame handling
I0727 11:08:40.710421       7 log.go:172] (0xc002db2630) Data frame received for 1
I0727 11:08:40.710457       7 log.go:172] (0xc002480fa0) (1) Data frame handling
I0727 11:08:40.710493       7 log.go:172] (0xc002480fa0) (1) Data frame sent
I0727 11:08:40.710516       7 log.go:172] (0xc002db2630) (0xc002480fa0) Stream removed, broadcasting: 1
I0727 11:08:40.710625       7 log.go:172] (0xc002db2630) (0xc002480fa0) Stream removed, broadcasting: 1
I0727 11:08:40.710657       7 log.go:172] (0xc002db2630) (0xc002481040) Stream removed, broadcasting: 3
I0727 11:08:40.710674       7 log.go:172] (0xc002db2630) (0xc000e94780) Stream removed, broadcasting: 5
I0727 11:08:40.710696       7 log.go:172] (0xc002db2630) Go away received
Jul 27 11:08:40.710: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:08:40.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6444" for this suite.

• [SLOW TEST:22.484 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2297,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:08:40.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-dc1aae46-6e1c-48ed-bce8-9cd8e3b57678
STEP: Creating a pod to test consume configMaps
Jul 27 11:08:40.793: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-46b171f5-619a-4c45-aac4-c39f79779891" in namespace "projected-9243" to be "Succeeded or Failed"
Jul 27 11:08:40.797: INFO: Pod "pod-projected-configmaps-46b171f5-619a-4c45-aac4-c39f79779891": Phase="Pending", Reason="", readiness=false. Elapsed: 3.689589ms
Jul 27 11:08:42.801: INFO: Pod "pod-projected-configmaps-46b171f5-619a-4c45-aac4-c39f79779891": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007837225s
Jul 27 11:08:44.806: INFO: Pod "pod-projected-configmaps-46b171f5-619a-4c45-aac4-c39f79779891": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012648898s
STEP: Saw pod success
Jul 27 11:08:44.806: INFO: Pod "pod-projected-configmaps-46b171f5-619a-4c45-aac4-c39f79779891" satisfied condition "Succeeded or Failed"
Jul 27 11:08:44.809: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-46b171f5-619a-4c45-aac4-c39f79779891 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 27 11:08:44.880: INFO: Waiting for pod pod-projected-configmaps-46b171f5-619a-4c45-aac4-c39f79779891 to disappear
Jul 27 11:08:44.887: INFO: Pod pod-projected-configmaps-46b171f5-619a-4c45-aac4-c39f79779891 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:08:44.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9243" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2331,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:08:44.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name projected-secret-test-00b0459e-e124-4057-800c-4c216619f9dd
STEP: Creating a pod to test consume secrets
Jul 27 11:08:44.967: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-15c29f9a-2ce2-4870-94c8-6171b73b7f60" in namespace "projected-125" to be "Succeeded or Failed"
Jul 27 11:08:45.028: INFO: Pod "pod-projected-secrets-15c29f9a-2ce2-4870-94c8-6171b73b7f60": Phase="Pending", Reason="", readiness=false. Elapsed: 61.107374ms
Jul 27 11:08:47.031: INFO: Pod "pod-projected-secrets-15c29f9a-2ce2-4870-94c8-6171b73b7f60": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064232077s
Jul 27 11:08:49.035: INFO: Pod "pod-projected-secrets-15c29f9a-2ce2-4870-94c8-6171b73b7f60": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067917544s
Jul 27 11:08:51.039: INFO: Pod "pod-projected-secrets-15c29f9a-2ce2-4870-94c8-6171b73b7f60": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072205453s
STEP: Saw pod success
Jul 27 11:08:51.039: INFO: Pod "pod-projected-secrets-15c29f9a-2ce2-4870-94c8-6171b73b7f60" satisfied condition "Succeeded or Failed"
Jul 27 11:08:51.042: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-15c29f9a-2ce2-4870-94c8-6171b73b7f60 container secret-volume-test: 
STEP: delete the pod
Jul 27 11:08:51.061: INFO: Waiting for pod pod-projected-secrets-15c29f9a-2ce2-4870-94c8-6171b73b7f60 to disappear
Jul 27 11:08:51.159: INFO: Pod pod-projected-secrets-15c29f9a-2ce2-4870-94c8-6171b73b7f60 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:08:51.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-125" for this suite.

• [SLOW TEST:6.268 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":125,"skipped":2407,"failed":0}
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:08:51.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:08:51.258: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jul 27 11:08:51.338: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:08:51.363: INFO: Number of nodes with available pods: 0
Jul 27 11:08:51.363: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:08:52.369: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:08:52.373: INFO: Number of nodes with available pods: 0
Jul 27 11:08:52.373: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:08:53.527: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:08:53.610: INFO: Number of nodes with available pods: 0
Jul 27 11:08:53.610: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:08:54.368: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:08:54.371: INFO: Number of nodes with available pods: 0
Jul 27 11:08:54.371: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:08:55.483: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:08:55.486: INFO: Number of nodes with available pods: 1
Jul 27 11:08:55.486: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:08:56.367: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:08:56.369: INFO: Number of nodes with available pods: 2
Jul 27 11:08:56.369: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jul 27 11:08:56.497: INFO: Wrong image for pod: daemon-set-f5mg2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:08:56.497: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:08:56.607: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:08:57.814: INFO: Wrong image for pod: daemon-set-f5mg2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:08:57.814: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:08:57.818: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:08:58.612: INFO: Wrong image for pod: daemon-set-f5mg2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:08:58.612: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:08:58.616: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:08:59.615: INFO: Wrong image for pod: daemon-set-f5mg2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:08:59.615: INFO: Pod daemon-set-f5mg2 is not available
Jul 27 11:08:59.615: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:08:59.618: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:00.611: INFO: Wrong image for pod: daemon-set-f5mg2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:00.611: INFO: Pod daemon-set-f5mg2 is not available
Jul 27 11:09:00.611: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:00.614: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:01.612: INFO: Wrong image for pod: daemon-set-f5mg2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:01.612: INFO: Pod daemon-set-f5mg2 is not available
Jul 27 11:09:01.612: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:01.615: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:02.612: INFO: Wrong image for pod: daemon-set-f5mg2. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:02.612: INFO: Pod daemon-set-f5mg2 is not available
Jul 27 11:09:02.612: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:02.615: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:03.622: INFO: Pod daemon-set-gfklv is not available
Jul 27 11:09:03.622: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:03.625: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:04.612: INFO: Pod daemon-set-gfklv is not available
Jul 27 11:09:04.612: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:04.616: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:05.612: INFO: Pod daemon-set-gfklv is not available
Jul 27 11:09:05.612: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:05.616: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:06.844: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:06.848: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:07.611: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:07.616: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:08.645: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:08.646: INFO: Pod daemon-set-t4s5v is not available
Jul 27 11:09:08.650: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:09.611: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:09.611: INFO: Pod daemon-set-t4s5v is not available
Jul 27 11:09:09.615: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:10.612: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:10.612: INFO: Pod daemon-set-t4s5v is not available
Jul 27 11:09:10.616: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:11.611: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:11.611: INFO: Pod daemon-set-t4s5v is not available
Jul 27 11:09:11.615: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:12.611: INFO: Wrong image for pod: daemon-set-t4s5v. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Jul 27 11:09:12.611: INFO: Pod daemon-set-t4s5v is not available
Jul 27 11:09:12.615: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:13.612: INFO: Pod daemon-set-vw5fx is not available
Jul 27 11:09:13.616: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Jul 27 11:09:13.620: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:13.624: INFO: Number of nodes with available pods: 1
Jul 27 11:09:13.624: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:09:14.664: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:14.667: INFO: Number of nodes with available pods: 1
Jul 27 11:09:14.667: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:09:15.629: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:15.633: INFO: Number of nodes with available pods: 1
Jul 27 11:09:15.633: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:09:16.629: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:16.631: INFO: Number of nodes with available pods: 1
Jul 27 11:09:16.631: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:09:17.629: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:09:17.634: INFO: Number of nodes with available pods: 2
Jul 27 11:09:17.634: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7444, will wait for the garbage collector to delete the pods
Jul 27 11:09:17.706: INFO: Deleting DaemonSet.extensions daemon-set took: 6.422101ms
Jul 27 11:09:18.006: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.276092ms
Jul 27 11:09:23.509: INFO: Number of nodes with available pods: 0
Jul 27 11:09:23.509: INFO: Number of running nodes: 0, number of available pods: 0
Jul 27 11:09:23.512: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7444/daemonsets","resourceVersion":"4556302"},"items":null}

Jul 27 11:09:23.515: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7444/pods","resourceVersion":"4556302"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:09:23.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7444" for this suite.

• [SLOW TEST:32.364 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":126,"skipped":2407,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:09:23.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-438d741e-e4f6-40a3-8b8f-f37758ad87f3
STEP: Creating a pod to test consume configMaps
Jul 27 11:09:23.656: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1fae8696-639a-43f6-8db6-2d97413304ee" in namespace "projected-1376" to be "Succeeded or Failed"
Jul 27 11:09:23.661: INFO: Pod "pod-projected-configmaps-1fae8696-639a-43f6-8db6-2d97413304ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.567456ms
Jul 27 11:09:25.699: INFO: Pod "pod-projected-configmaps-1fae8696-639a-43f6-8db6-2d97413304ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043279364s
Jul 27 11:09:27.703: INFO: Pod "pod-projected-configmaps-1fae8696-639a-43f6-8db6-2d97413304ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047033668s
STEP: Saw pod success
Jul 27 11:09:27.703: INFO: Pod "pod-projected-configmaps-1fae8696-639a-43f6-8db6-2d97413304ee" satisfied condition "Succeeded or Failed"
Jul 27 11:09:27.706: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-1fae8696-639a-43f6-8db6-2d97413304ee container projected-configmap-volume-test: 
STEP: delete the pod
Jul 27 11:09:28.114: INFO: Waiting for pod pod-projected-configmaps-1fae8696-639a-43f6-8db6-2d97413304ee to disappear
Jul 27 11:09:28.133: INFO: Pod pod-projected-configmaps-1fae8696-639a-43f6-8db6-2d97413304ee no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:09:28.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1376" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":127,"skipped":2417,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:09:28.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jul 27 11:09:28.418: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-a b8513ca0-30aa-4f4e-9367-9c2a98f3cd44 4556338 0 2020-07-27 11:09:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-27 11:09:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:09:28.418: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-a b8513ca0-30aa-4f4e-9367-9c2a98f3cd44 4556338 0 2020-07-27 11:09:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-27 11:09:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jul 27 11:09:38.426: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-a b8513ca0-30aa-4f4e-9367-9c2a98f3cd44 4556408 0 2020-07-27 11:09:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-27 11:09:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:09:38.426: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-a b8513ca0-30aa-4f4e-9367-9c2a98f3cd44 4556408 0 2020-07-27 11:09:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-27 11:09:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jul 27 11:09:48.435: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-a b8513ca0-30aa-4f4e-9367-9c2a98f3cd44 4556438 0 2020-07-27 11:09:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-27 11:09:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:09:48.435: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-a b8513ca0-30aa-4f4e-9367-9c2a98f3cd44 4556438 0 2020-07-27 11:09:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-27 11:09:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jul 27 11:09:58.442: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-a b8513ca0-30aa-4f4e-9367-9c2a98f3cd44 4556468 0 2020-07-27 11:09:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-27 11:09:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:09:58.443: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-a b8513ca0-30aa-4f4e-9367-9c2a98f3cd44 4556468 0 2020-07-27 11:09:28 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  [{e2e.test Update v1 2020-07-27 11:09:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jul 27 11:10:08.450: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-b 4362458c-3e43-481f-b6b7-9afa82f95b73 4556497 0 2020-07-27 11:10:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-07-27 11:10:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:10:08.450: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-b 4362458c-3e43-481f-b6b7-9afa82f95b73 4556497 0 2020-07-27 11:10:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-07-27 11:10:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jul 27 11:10:18.460: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-b 4362458c-3e43-481f-b6b7-9afa82f95b73 4556526 0 2020-07-27 11:10:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-07-27 11:10:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:10:18.460: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2809 /api/v1/namespaces/watch-2809/configmaps/e2e-watch-test-configmap-b 4362458c-3e43-481f-b6b7-9afa82f95b73 4556526 0 2020-07-27 11:10:08 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  [{e2e.test Update v1 2020-07-27 11:10:08 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:10:28.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2809" for this suite.

• [SLOW TEST:60.295 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":128,"skipped":2433,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:10:28.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jul 27 11:10:32.602: INFO: &Pod{ObjectMeta:{send-events-23694d0a-c04b-4fc3-a34b-769daa67a891  events-7039 /api/v1/namespaces/events-7039/pods/send-events-23694d0a-c04b-4fc3-a34b-769daa67a891 c4c79930-617d-4393-bf90-2357bbe50f9e 4556577 0 2020-07-27 11:10:28 +0000 UTC   map[name:foo time:575380659] map[] [] []  [{e2e.test Update v1 2020-07-27 11:10:28 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:10:31 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 
123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 55 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-d9m8s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-d9m8s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-d9m8s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:10:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:10:31 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:10:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:10:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.178,StartTime:2020-07-27 11:10:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:10:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://14165b239e0ce7f4cb59c248d58ba4bb0021f9fbd3108ce3ff03c22894520a48,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.178,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jul 27 11:10:34.607: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jul 27 11:10:36.612: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:10:36.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7039" for this suite.

• [SLOW TEST:8.184 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":129,"skipped":2459,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:10:36.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-276.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-276.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-276.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-276.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-276.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-276.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 27 11:10:42.799: INFO: DNS probes using dns-276/dns-test-dea7bf9a-9304-4f08-b06e-f0bf5b6a6160 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:10:42.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-276" for this suite.

• [SLOW TEST:6.250 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":130,"skipped":2483,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:10:42.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Jul 27 11:10:43.452: INFO: Waiting up to 5m0s for pod "client-containers-53a0b0ae-0e43-4c0b-a9fa-b5e777d2b8c7" in namespace "containers-9068" to be "Succeeded or Failed"
Jul 27 11:10:43.514: INFO: Pod "client-containers-53a0b0ae-0e43-4c0b-a9fa-b5e777d2b8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 62.551045ms
Jul 27 11:10:45.587: INFO: Pod "client-containers-53a0b0ae-0e43-4c0b-a9fa-b5e777d2b8c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135069515s
Jul 27 11:10:47.754: INFO: Pod "client-containers-53a0b0ae-0e43-4c0b-a9fa-b5e777d2b8c7": Phase="Running", Reason="", readiness=true. Elapsed: 4.302515851s
Jul 27 11:10:49.758: INFO: Pod "client-containers-53a0b0ae-0e43-4c0b-a9fa-b5e777d2b8c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.306793409s
STEP: Saw pod success
Jul 27 11:10:49.758: INFO: Pod "client-containers-53a0b0ae-0e43-4c0b-a9fa-b5e777d2b8c7" satisfied condition "Succeeded or Failed"
Jul 27 11:10:49.761: INFO: Trying to get logs from node kali-worker2 pod client-containers-53a0b0ae-0e43-4c0b-a9fa-b5e777d2b8c7 container test-container: 
STEP: delete the pod
Jul 27 11:10:49.796: INFO: Waiting for pod client-containers-53a0b0ae-0e43-4c0b-a9fa-b5e777d2b8c7 to disappear
Jul 27 11:10:49.825: INFO: Pod client-containers-53a0b0ae-0e43-4c0b-a9fa-b5e777d2b8c7 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:10:49.825: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-9068" for this suite.

• [SLOW TEST:6.923 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":131,"skipped":2493,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:10:49.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-897/configmap-test-20bda257-5a9b-4e30-8d59-7aea6b0162b7
STEP: Creating a pod to test consume configMaps
Jul 27 11:10:49.917: INFO: Waiting up to 5m0s for pod "pod-configmaps-8b32bd17-4d94-415c-b43a-89d01abea2cd" in namespace "configmap-897" to be "Succeeded or Failed"
Jul 27 11:10:49.939: INFO: Pod "pod-configmaps-8b32bd17-4d94-415c-b43a-89d01abea2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.390401ms
Jul 27 11:10:52.059: INFO: Pod "pod-configmaps-8b32bd17-4d94-415c-b43a-89d01abea2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142635408s
Jul 27 11:10:54.063: INFO: Pod "pod-configmaps-8b32bd17-4d94-415c-b43a-89d01abea2cd": Phase="Running", Reason="", readiness=true. Elapsed: 4.146779608s
Jul 27 11:10:56.068: INFO: Pod "pod-configmaps-8b32bd17-4d94-415c-b43a-89d01abea2cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.151868271s
STEP: Saw pod success
Jul 27 11:10:56.069: INFO: Pod "pod-configmaps-8b32bd17-4d94-415c-b43a-89d01abea2cd" satisfied condition "Succeeded or Failed"
Jul 27 11:10:56.072: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-8b32bd17-4d94-415c-b43a-89d01abea2cd container env-test: 
STEP: delete the pod
Jul 27 11:10:56.095: INFO: Waiting for pod pod-configmaps-8b32bd17-4d94-415c-b43a-89d01abea2cd to disappear
Jul 27 11:10:56.099: INFO: Pod pod-configmaps-8b32bd17-4d94-415c-b43a-89d01abea2cd no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:10:56.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-897" for this suite.

• [SLOW TEST:6.275 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2504,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:10:56.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jul 27 11:11:00.258: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-989 PodName:pod-sharedvolume-4e543469-4f1b-4404-b7e8-c2710f775d56 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 27 11:11:00.258: INFO: >>> kubeConfig: /root/.kube/config
I0727 11:11:00.289683       7 log.go:172] (0xc002c70580) (0xc001e877c0) Create stream
I0727 11:11:00.289714       7 log.go:172] (0xc002c70580) (0xc001e877c0) Stream added, broadcasting: 1
I0727 11:11:00.291867       7 log.go:172] (0xc002c70580) Reply frame received for 1
I0727 11:11:00.291913       7 log.go:172] (0xc002c70580) (0xc001e879a0) Create stream
I0727 11:11:00.291924       7 log.go:172] (0xc002c70580) (0xc001e879a0) Stream added, broadcasting: 3
I0727 11:11:00.293196       7 log.go:172] (0xc002c70580) Reply frame received for 3
I0727 11:11:00.293236       7 log.go:172] (0xc002c70580) (0xc00139adc0) Create stream
I0727 11:11:00.293260       7 log.go:172] (0xc002c70580) (0xc00139adc0) Stream added, broadcasting: 5
I0727 11:11:00.294434       7 log.go:172] (0xc002c70580) Reply frame received for 5
I0727 11:11:00.359220       7 log.go:172] (0xc002c70580) Data frame received for 3
I0727 11:11:00.359286       7 log.go:172] (0xc001e879a0) (3) Data frame handling
I0727 11:11:00.359313       7 log.go:172] (0xc001e879a0) (3) Data frame sent
I0727 11:11:00.359331       7 log.go:172] (0xc002c70580) Data frame received for 3
I0727 11:11:00.359348       7 log.go:172] (0xc001e879a0) (3) Data frame handling
I0727 11:11:00.359389       7 log.go:172] (0xc002c70580) Data frame received for 5
I0727 11:11:00.359428       7 log.go:172] (0xc00139adc0) (5) Data frame handling
I0727 11:11:00.361218       7 log.go:172] (0xc002c70580) Data frame received for 1
I0727 11:11:00.361272       7 log.go:172] (0xc001e877c0) (1) Data frame handling
I0727 11:11:00.361304       7 log.go:172] (0xc001e877c0) (1) Data frame sent
I0727 11:11:00.361322       7 log.go:172] (0xc002c70580) (0xc001e877c0) Stream removed, broadcasting: 1
I0727 11:11:00.361341       7 log.go:172] (0xc002c70580) Go away received
I0727 11:11:00.361503       7 log.go:172] (0xc002c70580) (0xc001e877c0) Stream removed, broadcasting: 1
I0727 11:11:00.361535       7 log.go:172] (0xc002c70580) (0xc001e879a0) Stream removed, broadcasting: 3
I0727 11:11:00.361551       7 log.go:172] (0xc002c70580) (0xc00139adc0) Stream removed, broadcasting: 5
Jul 27 11:11:00.361: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:00.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-989" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":133,"skipped":2533,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:00.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:11:00.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 27 11:11:03.423: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8031 create -f -'
Jul 27 11:11:07.342: INFO: stderr: ""
Jul 27 11:11:07.342: INFO: stdout: "e2e-test-crd-publish-openapi-9275-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul 27 11:11:07.342: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8031 delete e2e-test-crd-publish-openapi-9275-crds test-cr'
Jul 27 11:11:07.464: INFO: stderr: ""
Jul 27 11:11:07.465: INFO: stdout: "e2e-test-crd-publish-openapi-9275-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Jul 27 11:11:07.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8031 apply -f -'
Jul 27 11:11:07.870: INFO: stderr: ""
Jul 27 11:11:07.870: INFO: stdout: "e2e-test-crd-publish-openapi-9275-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Jul 27 11:11:07.870: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8031 delete e2e-test-crd-publish-openapi-9275-crds test-cr'
Jul 27 11:11:07.973: INFO: stderr: ""
Jul 27 11:11:07.973: INFO: stdout: "e2e-test-crd-publish-openapi-9275-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Jul 27 11:11:07.973: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9275-crds'
Jul 27 11:11:08.423: INFO: stderr: ""
Jul 27 11:11:08.423: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9275-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:10.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8031" for this suite.

• [SLOW TEST:10.011 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":134,"skipped":2585,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:10.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:14.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2616" for this suite.

• [SLOW TEST:5.071 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":135,"skipped":2589,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:15.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:11:15.930: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-39037334-a68f-451a-80d6-091d4929dfe4" in namespace "security-context-test-8648" to be "Succeeded or Failed"
Jul 27 11:11:16.019: INFO: Pod "busybox-readonly-false-39037334-a68f-451a-80d6-091d4929dfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 89.555173ms
Jul 27 11:11:18.149: INFO: Pod "busybox-readonly-false-39037334-a68f-451a-80d6-091d4929dfe4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.219211241s
Jul 27 11:11:20.153: INFO: Pod "busybox-readonly-false-39037334-a68f-451a-80d6-091d4929dfe4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.223432431s
Jul 27 11:11:20.153: INFO: Pod "busybox-readonly-false-39037334-a68f-451a-80d6-091d4929dfe4" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:20.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8648" for this suite.
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":136,"skipped":2601,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:20.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-527cc457-1283-4d49-a700-be5f30223fdd
STEP: Creating a pod to test consume secrets
Jul 27 11:11:20.283: INFO: Waiting up to 5m0s for pod "pod-secrets-8aa703fd-28bc-4031-8182-dd500cadd5a3" in namespace "secrets-9094" to be "Succeeded or Failed"
Jul 27 11:11:20.340: INFO: Pod "pod-secrets-8aa703fd-28bc-4031-8182-dd500cadd5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 56.824578ms
Jul 27 11:11:22.343: INFO: Pod "pod-secrets-8aa703fd-28bc-4031-8182-dd500cadd5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060270079s
Jul 27 11:11:24.347: INFO: Pod "pod-secrets-8aa703fd-28bc-4031-8182-dd500cadd5a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.064017816s
STEP: Saw pod success
Jul 27 11:11:24.347: INFO: Pod "pod-secrets-8aa703fd-28bc-4031-8182-dd500cadd5a3" satisfied condition "Succeeded or Failed"
Jul 27 11:11:24.350: INFO: Trying to get logs from node kali-worker pod pod-secrets-8aa703fd-28bc-4031-8182-dd500cadd5a3 container secret-volume-test: 
STEP: delete the pod
Jul 27 11:11:24.377: INFO: Waiting for pod pod-secrets-8aa703fd-28bc-4031-8182-dd500cadd5a3 to disappear
Jul 27 11:11:24.392: INFO: Pod pod-secrets-8aa703fd-28bc-4031-8182-dd500cadd5a3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:24.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9094" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":137,"skipped":2607,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:24.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0727 11:11:25.702569       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 27 11:11:25.702: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:25.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-158" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":138,"skipped":2653,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:25.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:11:27.105: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:11:29.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445087, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445087, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445087, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445087, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:11:32.177: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:32.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3340" for this suite.
STEP: Destroying namespace "webhook-3340-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.884 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":139,"skipped":2673,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:32.642: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's args
Jul 27 11:11:32.792: INFO: Waiting up to 5m0s for pod "var-expansion-0650407e-b76e-49fc-8740-3aea939d2f83" in namespace "var-expansion-7195" to be "Succeeded or Failed"
Jul 27 11:11:32.802: INFO: Pod "var-expansion-0650407e-b76e-49fc-8740-3aea939d2f83": Phase="Pending", Reason="", readiness=false. Elapsed: 9.93104ms
Jul 27 11:11:34.838: INFO: Pod "var-expansion-0650407e-b76e-49fc-8740-3aea939d2f83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04553358s
Jul 27 11:11:36.842: INFO: Pod "var-expansion-0650407e-b76e-49fc-8740-3aea939d2f83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049920649s
STEP: Saw pod success
Jul 27 11:11:36.842: INFO: Pod "var-expansion-0650407e-b76e-49fc-8740-3aea939d2f83" satisfied condition "Succeeded or Failed"
Jul 27 11:11:36.845: INFO: Trying to get logs from node kali-worker2 pod var-expansion-0650407e-b76e-49fc-8740-3aea939d2f83 container dapi-container: 
STEP: delete the pod
Jul 27 11:11:36.883: INFO: Waiting for pod var-expansion-0650407e-b76e-49fc-8740-3aea939d2f83 to disappear
Jul 27 11:11:36.909: INFO: Pod var-expansion-0650407e-b76e-49fc-8740-3aea939d2f83 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:36.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7195" for this suite.
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2681,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:36.916: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:11:37.283: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7cbc1dc-fbd7-4d86-a7e3-ff7f888ae9ef" in namespace "downward-api-9607" to be "Succeeded or Failed"
Jul 27 11:11:37.309: INFO: Pod "downwardapi-volume-a7cbc1dc-fbd7-4d86-a7e3-ff7f888ae9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 26.206032ms
Jul 27 11:11:39.389: INFO: Pod "downwardapi-volume-a7cbc1dc-fbd7-4d86-a7e3-ff7f888ae9ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106298109s
Jul 27 11:11:41.393: INFO: Pod "downwardapi-volume-a7cbc1dc-fbd7-4d86-a7e3-ff7f888ae9ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110371574s
STEP: Saw pod success
Jul 27 11:11:41.393: INFO: Pod "downwardapi-volume-a7cbc1dc-fbd7-4d86-a7e3-ff7f888ae9ef" satisfied condition "Succeeded or Failed"
Jul 27 11:11:41.396: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-a7cbc1dc-fbd7-4d86-a7e3-ff7f888ae9ef container client-container: 
STEP: delete the pod
Jul 27 11:11:41.486: INFO: Waiting for pod downwardapi-volume-a7cbc1dc-fbd7-4d86-a7e3-ff7f888ae9ef to disappear
Jul 27 11:11:41.497: INFO: Pod downwardapi-volume-a7cbc1dc-fbd7-4d86-a7e3-ff7f888ae9ef no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:41.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9607" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2694,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:41.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-7694f97b-5511-4c98-9510-c491227ee841
STEP: Creating a pod to test consume configMaps
Jul 27 11:11:41.583: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-daf53f48-ddcb-4432-8401-c9fba3b3659b" in namespace "projected-5666" to be "Succeeded or Failed"
Jul 27 11:11:41.610: INFO: Pod "pod-projected-configmaps-daf53f48-ddcb-4432-8401-c9fba3b3659b": Phase="Pending", Reason="", readiness=false. Elapsed: 26.631722ms
Jul 27 11:11:43.614: INFO: Pod "pod-projected-configmaps-daf53f48-ddcb-4432-8401-c9fba3b3659b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030661816s
Jul 27 11:11:45.619: INFO: Pod "pod-projected-configmaps-daf53f48-ddcb-4432-8401-c9fba3b3659b": Phase="Running", Reason="", readiness=true. Elapsed: 4.035094577s
Jul 27 11:11:47.622: INFO: Pod "pod-projected-configmaps-daf53f48-ddcb-4432-8401-c9fba3b3659b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038620128s
STEP: Saw pod success
Jul 27 11:11:47.622: INFO: Pod "pod-projected-configmaps-daf53f48-ddcb-4432-8401-c9fba3b3659b" satisfied condition "Succeeded or Failed"
Jul 27 11:11:47.625: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-daf53f48-ddcb-4432-8401-c9fba3b3659b container projected-configmap-volume-test: 
STEP: delete the pod
Jul 27 11:11:47.683: INFO: Waiting for pod pod-projected-configmaps-daf53f48-ddcb-4432-8401-c9fba3b3659b to disappear
Jul 27 11:11:47.689: INFO: Pod pod-projected-configmaps-daf53f48-ddcb-4432-8401-c9fba3b3659b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:11:47.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5666" for this suite.

• [SLOW TEST:6.191 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2718,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:11:47.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4769 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4769;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4769 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4769;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4769.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4769.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4769.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4769.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4769.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4769.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4769.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4769.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4769.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4769.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4769.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 35.148.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.148.35_udp@PTR;check="$$(dig +tcp +noall +answer +search 35.148.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.148.35_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4769 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4769;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4769 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4769;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4769.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4769.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4769.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4769.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4769.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4769.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4769.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4769.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4769.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4769.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4769.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4769.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 35.148.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.148.35_udp@PTR;check="$$(dig +tcp +noall +answer +search 35.148.108.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.108.148.35_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 27 11:11:53.930: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.139: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.143: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.147: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.150: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.153: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.157: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.160: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.183: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.187: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.199: INFO: Unable to read jessie_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.202: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.205: INFO: Unable to read jessie_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.208: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.211: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.214: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:54.233: INFO: Lookups using dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4769 wheezy_tcp@dns-test-service.dns-4769 wheezy_udp@dns-test-service.dns-4769.svc wheezy_tcp@dns-test-service.dns-4769.svc wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4769 jessie_tcp@dns-test-service.dns-4769 jessie_udp@dns-test-service.dns-4769.svc jessie_tcp@dns-test-service.dns-4769.svc jessie_udp@_http._tcp.dns-test-service.dns-4769.svc jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc]

Jul 27 11:11:59.237: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.275: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.279: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.282: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.285: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.288: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.290: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.293: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.317: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.320: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.323: INFO: Unable to read jessie_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.325: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.327: INFO: Unable to read jessie_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.330: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.332: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.335: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:11:59.352: INFO: Lookups using dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4769 wheezy_tcp@dns-test-service.dns-4769 wheezy_udp@dns-test-service.dns-4769.svc wheezy_tcp@dns-test-service.dns-4769.svc wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4769 jessie_tcp@dns-test-service.dns-4769 jessie_udp@dns-test-service.dns-4769.svc jessie_tcp@dns-test-service.dns-4769.svc jessie_udp@_http._tcp.dns-test-service.dns-4769.svc jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc]

Jul 27 11:12:04.288: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.291: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.294: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.297: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.300: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.303: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.306: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.309: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.355: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.358: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.361: INFO: Unable to read jessie_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.365: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.368: INFO: Unable to read jessie_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.371: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.374: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.377: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:04.393: INFO: Lookups using dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4769 wheezy_tcp@dns-test-service.dns-4769 wheezy_udp@dns-test-service.dns-4769.svc wheezy_tcp@dns-test-service.dns-4769.svc wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4769 jessie_tcp@dns-test-service.dns-4769 jessie_udp@dns-test-service.dns-4769.svc jessie_tcp@dns-test-service.dns-4769.svc jessie_udp@_http._tcp.dns-test-service.dns-4769.svc jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc]

Jul 27 11:12:09.238: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.242: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.246: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.253: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.258: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.262: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.265: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.299: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.302: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.304: INFO: Unable to read jessie_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.307: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.310: INFO: Unable to read jessie_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.312: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.314: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.317: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:09.334: INFO: Lookups using dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4769 wheezy_tcp@dns-test-service.dns-4769 wheezy_udp@dns-test-service.dns-4769.svc wheezy_tcp@dns-test-service.dns-4769.svc wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4769 jessie_tcp@dns-test-service.dns-4769 jessie_udp@dns-test-service.dns-4769.svc jessie_tcp@dns-test-service.dns-4769.svc jessie_udp@_http._tcp.dns-test-service.dns-4769.svc jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc]

Jul 27 11:12:14.239: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.243: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.247: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.253: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.255: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.258: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.260: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.289: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.329: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.333: INFO: Unable to read jessie_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.337: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.340: INFO: Unable to read jessie_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.342: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.345: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.348: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:14.366: INFO: Lookups using dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4769 wheezy_tcp@dns-test-service.dns-4769 wheezy_udp@dns-test-service.dns-4769.svc wheezy_tcp@dns-test-service.dns-4769.svc wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4769 jessie_tcp@dns-test-service.dns-4769 jessie_udp@dns-test-service.dns-4769.svc jessie_tcp@dns-test-service.dns-4769.svc jessie_udp@_http._tcp.dns-test-service.dns-4769.svc jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc]

Jul 27 11:12:19.240: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.244: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.247: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.250: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.254: INFO: Unable to read wheezy_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.257: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.260: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.264: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.288: INFO: Unable to read jessie_udp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.291: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.294: INFO: Unable to read jessie_udp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.296: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769 from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.299: INFO: Unable to read jessie_udp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.302: INFO: Unable to read jessie_tcp@dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.304: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.307: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc from pod dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36: the server could not find the requested resource (get pods dns-test-97328e2b-e93b-415f-bbe8-79496181eb36)
Jul 27 11:12:19.324: INFO: Lookups using dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-4769 wheezy_tcp@dns-test-service.dns-4769 wheezy_udp@dns-test-service.dns-4769.svc wheezy_tcp@dns-test-service.dns-4769.svc wheezy_udp@_http._tcp.dns-test-service.dns-4769.svc wheezy_tcp@_http._tcp.dns-test-service.dns-4769.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-4769 jessie_tcp@dns-test-service.dns-4769 jessie_udp@dns-test-service.dns-4769.svc jessie_tcp@dns-test-service.dns-4769.svc jessie_udp@_http._tcp.dns-test-service.dns-4769.svc jessie_tcp@_http._tcp.dns-test-service.dns-4769.svc]

Jul 27 11:12:24.336: INFO: DNS probes using dns-4769/dns-test-97328e2b-e93b-415f-bbe8-79496181eb36 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:12:25.040: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4769" for this suite.

• [SLOW TEST:37.354 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":143,"skipped":2724,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:12:25.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:12:25.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jul 27 11:12:28.113: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3885 create -f -'
Jul 27 11:12:31.996: INFO: stderr: ""
Jul 27 11:12:31.996: INFO: stdout: "e2e-test-crd-publish-openapi-5080-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 27 11:12:31.996: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3885 delete e2e-test-crd-publish-openapi-5080-crds test-cr'
Jul 27 11:12:32.102: INFO: stderr: ""
Jul 27 11:12:32.102: INFO: stdout: "e2e-test-crd-publish-openapi-5080-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jul 27 11:12:32.103: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3885 apply -f -'
Jul 27 11:12:32.409: INFO: stderr: ""
Jul 27 11:12:32.409: INFO: stdout: "e2e-test-crd-publish-openapi-5080-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jul 27 11:12:32.409: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3885 delete e2e-test-crd-publish-openapi-5080-crds test-cr'
Jul 27 11:12:32.537: INFO: stderr: ""
Jul 27 11:12:32.537: INFO: stdout: "e2e-test-crd-publish-openapi-5080-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jul 27 11:12:32.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5080-crds'
Jul 27 11:12:32.780: INFO: stderr: ""
Jul 27 11:12:32.780: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5080-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:12:35.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3885" for this suite.

• [SLOW TEST:10.695 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":144,"skipped":2732,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:12:35.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8893.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8893.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8893.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

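The names being probed here (dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local and the subdomain record itself) come from the pod hostname/subdomain mechanism backed by a headless service. A minimal sketch of that setup, with an illustrative busybox image and label (the test's actual objects carry generated names in namespace dns-8893):

cat <<'EOF' | kubectl apply -n dns-8893 -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-2
spec:
  clusterIP: None          # headless: per-pod A records instead of a ClusterIP
  selector:
    app: dns-querier
  ports:
  - name: http
    port: 80
---
apiVersion: v1
kind: Pod
metadata:
  name: dns-querier-2
  labels:
    app: dns-querier       # must match the headless service selector
spec:
  hostname: dns-querier-2
  subdomain: dns-test-service-2   # ties the pod into the service's DNS zone
  containers:
  - name: querier
    image: busybox:1.31
    command: ["sleep", "3600"]
EOF
# Once the pod is ready, both of these names should resolve to the pod IP:
#   dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local
#   dns-test-service-2.dns-8893.svc.cluster.local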
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 27 11:12:41.953: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:41.957: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:41.982: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:41.985: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:41.995: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:41.998: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:42.002: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:42.005: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:42.011: INFO: Lookups using dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local]

Jul 27 11:12:47.017: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:47.021: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:47.026: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:47.028: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:47.036: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:47.039: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:47.042: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:47.045: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:47.051: INFO: Lookups using dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local]

Jul 27 11:12:52.017: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:52.021: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:52.024: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:52.028: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:52.038: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:52.042: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:52.045: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:52.048: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:52.055: INFO: Lookups using dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local]

Jul 27 11:12:57.017: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:57.021: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:57.025: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:57.028: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:57.037: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:57.040: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:57.043: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:57.046: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:12:57.051: INFO: Lookups using dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local]

Jul 27 11:13:02.016: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:02.020: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:02.023: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:02.026: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:02.036: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:02.039: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:02.042: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:02.046: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:02.053: INFO: Lookups using dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local]

Jul 27 11:13:07.021: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:07.025: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:07.028: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:07.031: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:07.039: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:07.042: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:07.045: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:07.048: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:07.054: INFO: Lookups using dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local jessie_udp@dns-test-service-2.dns-8893.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8893.svc.cluster.local]

Jul 27 11:13:12.022: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:12.025: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local from pod dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd: the server could not find the requested resource (get pods dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd)
Jul 27 11:13:12.049: INFO: Lookups using dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd failed for: [wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8893.svc.cluster.local]

Jul 27 11:13:17.055: INFO: DNS probes using dns-8893/dns-test-8cf71ad2-5f44-4b9f-929f-55a54af450fd succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:13:19.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8893" for this suite.

• [SLOW TEST:44.685 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":145,"skipped":2755,"failed":0}
SSSSSS
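
The subdomain lookups above (e.g. dns-querier-2.dns-test-service-2.dns-8893.svc.cluster.local) only resolve because the pod sets hostname/subdomain and a headless Service with the matching name selects it. Below is a minimal client-go sketch of that fixture, assuming an existing kubernetes.Interface client; the names dns-test-service-2 and dns-querier-2 mirror the log, while the helper function, label, and busybox image are illustrative assumptions, not the e2e framework's own code.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSubdomainDNSFixture creates a headless Service plus a pod whose
// hostname/subdomain match it, so that
// dns-querier-2.dns-test-service-2.<ns>.svc.cluster.local resolves to the pod IP.
func createSubdomainDNSFixture(ctx context.Context, client kubernetes.Interface, ns string) error {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone, // headless: required for per-pod subdomain records
			Selector:  map[string]string{"dns-test": "true"},
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	if _, err := client.CoreV1().Services(ns).Create(ctx, svc, metav1.CreateOptions{}); err != nil {
		return err
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "dns-querier-2",
			Labels: map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",      // becomes the left-most DNS label
			Subdomain: "dns-test-service-2", // must equal the headless Service name
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "busybox:1.29",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```

The early "could not find the requested resource" retries in the log are expected while the records propagate; the probes succeed once both the headless Service endpoints and the pod's A record exist.
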
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:13:20.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:13:33.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-936" for this suite.

• [SLOW TEST:12.650 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":275,"completed":146,"skipped":2761,"failed":0}
SSSS
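
The ResourceQuota test above watches status.used rise and fall as a ReplicaSet is created and deleted. A small sketch of the kind of object-count quota involved, assuming a client-go clientset; the quota name, the hard limit of 5, and the use of the generic count/replicasets.apps key are illustrative assumptions.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createReplicaSetQuota creates a ResourceQuota that counts ReplicaSets;
// status.used["count/replicasets.apps"] then tracks ReplicaSet creation and
// deletion in the namespace.
func createReplicaSetQuota(ctx context.Context, client kubernetes.Interface, ns string) (*corev1.ResourceQuota, error) {
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				// Generic object-count quota for the apps/v1 ReplicaSet resource.
				corev1.ResourceName("count/replicasets.apps"): resource.MustParse("5"),
			},
		},
	}
	return client.CoreV1().ResourceQuotas(ns).Create(ctx, quota, metav1.CreateOptions{})
}
```
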
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:13:33.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:13:33.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-8550" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":147,"skipped":2765,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:13:33.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 27 11:13:33.250: INFO: Waiting up to 5m0s for pod "downward-api-56d9eca7-cded-4a9a-82a1-c67ccc922765" in namespace "downward-api-2250" to be "Succeeded or Failed"
Jul 27 11:13:33.254: INFO: Pod "downward-api-56d9eca7-cded-4a9a-82a1-c67ccc922765": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136572ms
Jul 27 11:13:35.270: INFO: Pod "downward-api-56d9eca7-cded-4a9a-82a1-c67ccc922765": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020184757s
Jul 27 11:13:37.281: INFO: Pod "downward-api-56d9eca7-cded-4a9a-82a1-c67ccc922765": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031726584s
STEP: Saw pod success
Jul 27 11:13:37.281: INFO: Pod "downward-api-56d9eca7-cded-4a9a-82a1-c67ccc922765" satisfied condition "Succeeded or Failed"
Jul 27 11:13:37.284: INFO: Trying to get logs from node kali-worker2 pod downward-api-56d9eca7-cded-4a9a-82a1-c67ccc922765 container dapi-container: 
STEP: delete the pod
Jul 27 11:13:37.332: INFO: Waiting for pod downward-api-56d9eca7-cded-4a9a-82a1-c67ccc922765 to disappear
Jul 27 11:13:37.337: INFO: Pod downward-api-56d9eca7-cded-4a9a-82a1-c67ccc922765 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:13:37.338: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2250" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":148,"skipped":2776,"failed":0}
SSSSSS
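
The Downward API test above injects the pod's own identity into its environment. A minimal sketch of such a pod via client-go, assuming an existing clientset; the pod name, env var names, image, and command are illustrative assumptions.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDownwardAPIEnvPod creates a pod whose container sees its own name,
// namespace and IP as environment variables via the downward API.
func createDownwardAPIEnvPod(ctx context.Context, client kubernetes.Interface, ns string) error {
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-env"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "env | grep ^POD_"},
				Env: []corev1.EnvVar{
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```

The pod runs to completion ("Succeeded or Failed" above) because it only prints its environment and exits.
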
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:13:37.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:13:38.034: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:13:40.043: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445218, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445218, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445218, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445218, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:13:43.102: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jul 27 11:13:47.193: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config attach --namespace=webhook-1007 to-be-attached-pod -i -c=container1'
Jul 27 11:13:47.308: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:13:47.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1007" for this suite.
STEP: Destroying namespace "webhook-1007-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.038 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":149,"skipped":2782,"failed":0}
SSSSS
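
The `kubectl attach` denial above (rc: 1) comes from a validating webhook registered on the pods/attach subresource. A hedged client-go sketch of that registration, assuming a webhook backend Service is already deployed; the webhook name, service name/namespace, path, and caBundle parameter are placeholders, and the namespace string merely echoes the log.

```go
package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerDenyAttachWebhook registers a validating webhook that intercepts the
// CONNECT operation on pods/attach, which is what `kubectl attach` uses.
func registerDenyAttachWebhook(ctx context.Context, client kubernetes.Interface, caBundle []byte) error {
	sideEffects := admissionregistrationv1.SideEffectClassNone
	failurePolicy := admissionregistrationv1.Fail
	path := "/pods/attach"

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-attaching-pod.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-attaching-pod.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Connect},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods/attach"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-1007", // placeholder, mirrors the log's namespace
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: caBundle,
			},
			SideEffects:             &sideEffects,
			FailurePolicy:           &failurePolicy,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := client.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	return err
}
```

With this in place, any attach request the backend rejects is surfaced to kubectl as a non-zero exit code, matching the "rc: 1" line above.
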
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:13:47.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Jul 27 11:13:47.460: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:13:55.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9200" for this suite.

• [SLOW TEST:7.944 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":150,"skipped":2787,"failed":0}
SSSSSSS
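
The init-container test above creates a RestartAlways pod whose init containers must all succeed before the main container starts. A minimal sketch, assuming a client-go clientset; the pod name, images, and commands are illustrative placeholders.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createPodWithInitContainers creates a RestartAlways pod with two init
// containers; the kubelet runs them sequentially and only then starts run1.
func createPodWithInitContainers(ctx context.Context, client kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox:1.29", Command: []string{"true"}},
				{Name: "init2", Image: "busybox:1.29", Command: []string{"true"}},
			},
			Containers: []corev1.Container{{
				Name:  "run1",
				Image: "k8s.gcr.io/pause:3.2",
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```
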
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:13:55.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:13:55.693: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:13:57.731: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445235, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445235, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445235, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445235, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:14:00.782: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:14:00.941: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2119-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:14:03.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7045" for this suite.
STEP: Destroying namespace "webhook-7045-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:9.878 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":151,"skipped":2794,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:14:05.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-59ce4150-d990-465a-8803-ef3c6ca4f564
STEP: Creating a pod to test consume secrets
Jul 27 11:14:06.390: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-56922293-fa36-4eba-99c4-a1c0a9a3535a" in namespace "projected-9831" to be "Succeeded or Failed"
Jul 27 11:14:06.648: INFO: Pod "pod-projected-secrets-56922293-fa36-4eba-99c4-a1c0a9a3535a": Phase="Pending", Reason="", readiness=false. Elapsed: 258.163313ms
Jul 27 11:14:08.652: INFO: Pod "pod-projected-secrets-56922293-fa36-4eba-99c4-a1c0a9a3535a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.262785847s
Jul 27 11:14:10.655: INFO: Pod "pod-projected-secrets-56922293-fa36-4eba-99c4-a1c0a9a3535a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.265564868s
Jul 27 11:14:12.659: INFO: Pod "pod-projected-secrets-56922293-fa36-4eba-99c4-a1c0a9a3535a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.269533342s
STEP: Saw pod success
Jul 27 11:14:12.659: INFO: Pod "pod-projected-secrets-56922293-fa36-4eba-99c4-a1c0a9a3535a" satisfied condition "Succeeded or Failed"
Jul 27 11:14:12.662: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-56922293-fa36-4eba-99c4-a1c0a9a3535a container projected-secret-volume-test: 
STEP: delete the pod
Jul 27 11:14:12.713: INFO: Waiting for pod pod-projected-secrets-56922293-fa36-4eba-99c4-a1c0a9a3535a to disappear
Jul 27 11:14:12.724: INFO: Pod pod-projected-secrets-56922293-fa36-4eba-99c4-a1c0a9a3535a no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:14:12.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9831" for this suite.

• [SLOW TEST:7.526 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2818,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
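
The projected-secret test above mounts a Secret through a projected volume with a defaultMode. A sketch of that shape, assuming a clientset and an existing Secret passed in by name; the mount path, mode value, image, and command are illustrative assumptions.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createProjectedSecretPod mounts an existing Secret through a projected
// volume with defaultMode 0400, so every projected file gets that mode unless
// an individual item overrides it.
func createProjectedSecretPod(ctx context.Context, client kubernetes.Interface, ns, secretName string) error {
	defaultMode := int32(0400)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &defaultMode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox:1.29",
				Command: []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```

The later "mappings and Item Mode" test in this log uses the same API surface, adding per-item Key/Path/Mode entries under the SecretProjection instead of relying on DefaultMode alone.
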
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:14:12.732: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:14:13.003: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81b0f035-bf2f-450a-9000-d421a452474e" in namespace "downward-api-3893" to be "Succeeded or Failed"
Jul 27 11:14:13.163: INFO: Pod "downwardapi-volume-81b0f035-bf2f-450a-9000-d421a452474e": Phase="Pending", Reason="", readiness=false. Elapsed: 160.593741ms
Jul 27 11:14:15.167: INFO: Pod "downwardapi-volume-81b0f035-bf2f-450a-9000-d421a452474e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164571754s
Jul 27 11:14:17.172: INFO: Pod "downwardapi-volume-81b0f035-bf2f-450a-9000-d421a452474e": Phase="Running", Reason="", readiness=true. Elapsed: 4.169137841s
Jul 27 11:14:19.176: INFO: Pod "downwardapi-volume-81b0f035-bf2f-450a-9000-d421a452474e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.173627588s
STEP: Saw pod success
Jul 27 11:14:19.176: INFO: Pod "downwardapi-volume-81b0f035-bf2f-450a-9000-d421a452474e" satisfied condition "Succeeded or Failed"
Jul 27 11:14:19.180: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-81b0f035-bf2f-450a-9000-d421a452474e container client-container: 
STEP: delete the pod
Jul 27 11:14:19.214: INFO: Waiting for pod downwardapi-volume-81b0f035-bf2f-450a-9000-d421a452474e to disappear
Jul 27 11:14:19.245: INFO: Pod downwardapi-volume-81b0f035-bf2f-450a-9000-d421a452474e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:14:19.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3893" for this suite.

• [SLOW TEST:6.522 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":153,"skipped":2847,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
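
The downwardAPI-volume test above exposes only the pod name as a file. A minimal sketch, assuming a client-go clientset; the volume name, mount path, image, and command are illustrative placeholders.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDownwardAPIVolumePod projects the pod's own name into
// /etc/podinfo/podname via a downwardAPI volume; the container prints the
// file and exits.
func createDownwardAPIVolumePod(ctx context.Context, client kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox:1.29",
				Command:      []string{"cat", "/etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```
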
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:14:19.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-0577d21f-d7df-425c-b538-cda181b9b3bd
STEP: Creating a pod to test consume secrets
Jul 27 11:14:19.317: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-93886c67-8276-438c-873e-25df4141f2ca" in namespace "projected-9912" to be "Succeeded or Failed"
Jul 27 11:14:19.334: INFO: Pod "pod-projected-secrets-93886c67-8276-438c-873e-25df4141f2ca": Phase="Pending", Reason="", readiness=false. Elapsed: 16.279304ms
Jul 27 11:14:21.337: INFO: Pod "pod-projected-secrets-93886c67-8276-438c-873e-25df4141f2ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020055598s
Jul 27 11:14:23.341: INFO: Pod "pod-projected-secrets-93886c67-8276-438c-873e-25df4141f2ca": Phase="Running", Reason="", readiness=true. Elapsed: 4.023446122s
Jul 27 11:14:25.345: INFO: Pod "pod-projected-secrets-93886c67-8276-438c-873e-25df4141f2ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027093373s
STEP: Saw pod success
Jul 27 11:14:25.345: INFO: Pod "pod-projected-secrets-93886c67-8276-438c-873e-25df4141f2ca" satisfied condition "Succeeded or Failed"
Jul 27 11:14:25.347: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-93886c67-8276-438c-873e-25df4141f2ca container projected-secret-volume-test: 
STEP: delete the pod
Jul 27 11:14:25.409: INFO: Waiting for pod pod-projected-secrets-93886c67-8276-438c-873e-25df4141f2ca to disappear
Jul 27 11:14:25.424: INFO: Pod pod-projected-secrets-93886c67-8276-438c-873e-25df4141f2ca no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:14:25.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9912" for this suite.

• [SLOW TEST:6.178 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2872,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:14:25.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jul 27 11:14:25.493: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1374 /api/v1/namespaces/watch-1374/configmaps/e2e-watch-test-label-changed 7aad82f4-640d-43ab-9678-90f2ca517be4 4558193 0 2020-07-27 11:14:25 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-27 11:14:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:14:25.493: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1374 /api/v1/namespaces/watch-1374/configmaps/e2e-watch-test-label-changed 7aad82f4-640d-43ab-9678-90f2ca517be4 4558194 0 2020-07-27 11:14:25 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-27 11:14:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:14:25.493: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1374 /api/v1/namespaces/watch-1374/configmaps/e2e-watch-test-label-changed 7aad82f4-640d-43ab-9678-90f2ca517be4 4558195 0 2020-07-27 11:14:25 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-27 11:14:25 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jul 27 11:14:35.601: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1374 /api/v1/namespaces/watch-1374/configmaps/e2e-watch-test-label-changed 7aad82f4-640d-43ab-9678-90f2ca517be4 4558237 0 2020-07-27 11:14:25 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-27 11:14:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:14:35.601: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1374 /api/v1/namespaces/watch-1374/configmaps/e2e-watch-test-label-changed 7aad82f4-640d-43ab-9678-90f2ca517be4 4558238 0 2020-07-27 11:14:25 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-27 11:14:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:14:35.602: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-1374 /api/v1/namespaces/watch-1374/configmaps/e2e-watch-test-label-changed 7aad82f4-640d-43ab-9678-90f2ca517be4 4558239 0 2020-07-27 11:14:25 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  [{e2e.test Update v1 2020-07-27 11:14:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:14:35.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1374" for this suite.

• [SLOW TEST:10.177 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":155,"skipped":2878,"failed":0}
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:14:35.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 27 11:14:35.702: INFO: Waiting up to 5m0s for pod "pod-0ae0171e-22bd-479a-9026-7c12fe947d5c" in namespace "emptydir-4052" to be "Succeeded or Failed"
Jul 27 11:14:35.705: INFO: Pod "pod-0ae0171e-22bd-479a-9026-7c12fe947d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.184881ms
Jul 27 11:14:37.708: INFO: Pod "pod-0ae0171e-22bd-479a-9026-7c12fe947d5c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006505791s
Jul 27 11:14:39.712: INFO: Pod "pod-0ae0171e-22bd-479a-9026-7c12fe947d5c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010463583s
STEP: Saw pod success
Jul 27 11:14:39.712: INFO: Pod "pod-0ae0171e-22bd-479a-9026-7c12fe947d5c" satisfied condition "Succeeded or Failed"
Jul 27 11:14:39.715: INFO: Trying to get logs from node kali-worker2 pod pod-0ae0171e-22bd-479a-9026-7c12fe947d5c container test-container: 
STEP: delete the pod
Jul 27 11:14:39.935: INFO: Waiting for pod pod-0ae0171e-22bd-479a-9026-7c12fe947d5c to disappear
Jul 27 11:14:39.975: INFO: Pod pod-0ae0171e-22bd-479a-9026-7c12fe947d5c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:14:39.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4052" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2878,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
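
The emptyDir test above exercises a memory-backed (tmpfs) volume used by a non-root container. A hedged sketch of that pod shape, assuming a clientset; the UID, file path, mode handling in the command, and image are illustrative assumptions rather than the test's exact image and arguments.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createTmpfsEmptyDirPod runs a non-root container over a memory-backed
// emptyDir; the container writes a 0644 file and lists its permissions,
// roughly the property the conformance test asserts on.
func createTmpfsEmptyDirPod(ctx context.Context, client kubernetes.Interface, ns string) error {
	nonRootUser := int64(1000)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &nonRootUser, // the "non-root" part of the test name
			},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox:1.29",
				Command: []string{"sh", "-c",
					"echo hello > /mnt/test/file && chmod 0644 /mnt/test/file && ls -l /mnt/test/file"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```
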
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:14:40.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-secret-kqjn
STEP: Creating a pod to test atomic-volume-subpath
Jul 27 11:14:40.188: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-kqjn" in namespace "subpath-1671" to be "Succeeded or Failed"
Jul 27 11:14:40.203: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Pending", Reason="", readiness=false. Elapsed: 14.17863ms
Jul 27 11:14:42.250: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062052131s
Jul 27 11:14:44.254: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065766317s
Jul 27 11:14:46.258: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 6.069762192s
Jul 27 11:14:48.262: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 8.073756579s
Jul 27 11:14:50.266: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 10.077758682s
Jul 27 11:14:52.271: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 12.082616739s
Jul 27 11:14:54.276: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 14.087414331s
Jul 27 11:14:56.281: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 16.092242031s
Jul 27 11:14:58.285: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 18.096824979s
Jul 27 11:15:00.290: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 20.101194064s
Jul 27 11:15:02.293: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 22.104632397s
Jul 27 11:15:04.296: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Running", Reason="", readiness=true. Elapsed: 24.108108504s
Jul 27 11:15:06.301: INFO: Pod "pod-subpath-test-secret-kqjn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.112513625s
STEP: Saw pod success
Jul 27 11:15:06.301: INFO: Pod "pod-subpath-test-secret-kqjn" satisfied condition "Succeeded or Failed"
Jul 27 11:15:06.305: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-kqjn container test-container-subpath-secret-kqjn: 
STEP: delete the pod
Jul 27 11:15:06.361: INFO: Waiting for pod pod-subpath-test-secret-kqjn to disappear
Jul 27 11:15:06.366: INFO: Pod pod-subpath-test-secret-kqjn no longer exists
STEP: Deleting pod pod-subpath-test-secret-kqjn
Jul 27 11:15:06.366: INFO: Deleting pod "pod-subpath-test-secret-kqjn" in namespace "subpath-1671"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:15:06.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1671" for this suite.

• [SLOW TEST:26.375 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":157,"skipped":2934,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
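
The subpath test above mounts part of a secret volume via subPath while the atomic-writer machinery keeps the projected content consistent. A simplified sketch of a secret subPath mount, assuming a clientset and an existing Secret; the secret name "my-secret", key "data-1", paths, and image are illustrative assumptions.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createSecretSubpathPod mounts a single key out of a Secret volume using
// subPath, instead of mounting the whole secret directory.
func createSecretSubpathPod(ctx context.Context, client kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-test-secret"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{SecretName: "my-secret"},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox:1.29",
				Command: []string{"cat", "/probe-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/probe-volume/data-1",
					SubPath:   "data-1", // mount just the "data-1" key as a file
					ReadOnly:  true,
				}},
			}},
		},
	}
	_, err := client.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
```
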
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:15:06.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Jul 27 11:15:06.434: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix816472412/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:15:06.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2654" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":158,"skipped":2973,"failed":0}
SSSS
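
The proxy test above starts kubectl proxy on a Unix socket and fetches /api/ through it. A rough Go sketch of the same interaction, assuming kubectl is on PATH with a working kubeconfig; the socket path, the fixed sleep, and the function are illustrative, not how the e2e framework does it.

```go
package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
	"os/exec"
	"time"
)

// proxyOverUnixSocket starts `kubectl proxy --unix-socket=...` and retrieves
// /api/ through the socket.
func proxyOverUnixSocket() error {
	socket := "/tmp/kubectl-proxy-demo.sock"
	cmd := exec.Command("kubectl", "proxy", "--unix-socket="+socket)
	if err := cmd.Start(); err != nil {
		return err
	}
	defer cmd.Process.Kill()
	time.Sleep(2 * time.Second) // crude wait for the proxy to come up

	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the host in the URL and dial the Unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return net.Dial("unix", socket)
			},
		},
	}
	resp, err := client.Get("http://unix/api/")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		return err
	}
	fmt.Println(string(body)) // should list the server's API versions
	return nil
}
```
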
------------------------------
[sig-scheduling] LimitRange 
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:15:06.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename limitrange
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a LimitRange
STEP: Setting up watch
STEP: Submitting a LimitRange
Jul 27 11:15:06.574: INFO: observed the limitRanges list
STEP: Verifying LimitRange creation was observed
STEP: Fetching the LimitRange to ensure it has proper values
Jul 27 11:15:06.578: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Jul 27 11:15:06.578: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with no resource requirements
STEP: Ensuring Pod has resource requirements applied from LimitRange
Jul 27 11:15:06.588: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {}  BinarySI} memory:{{209715200 0} {}  BinarySI}]
Jul 27 11:15:06.588: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Creating a Pod with partial resource requirements
STEP: Ensuring Pod has merged resource requirements applied from LimitRange
Jul 27 11:15:06.657: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}]
Jul 27 11:15:06.657: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}]
STEP: Failing to create a Pod with less than min resources
STEP: Failing to create a Pod with more than max resources
STEP: Updating a LimitRange
STEP: Verifying LimitRange updating is effective
STEP: Creating a Pod with less than former min resources
STEP: Failing to create a Pod with more than max resources
STEP: Deleting a LimitRange
STEP: Verifying the LimitRange was deleted
Jul 27 11:15:13.954: INFO: limitRange is already deleted
STEP: Creating a Pod with more than former max resources
[AfterEach] [sig-scheduling] LimitRange
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:15:13.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "limitrange-9349" for this suite.

• [SLOW TEST:7.502 seconds]
[sig-scheduling] LimitRange
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance]","total":275,"completed":159,"skipped":2977,"failed":0}
SSS
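
The "Verifying requests/limits" lines above compare pod resources against LimitRange defaults (requests of 100m CPU / 200Mi memory / 200Gi ephemeral-storage, limits of 500m / 500Mi / 500Gi, matching the raw quantities in the log). A sketch of a LimitRange carrying those defaults, assuming a clientset; the object name is a placeholder.

```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDefaultLimitRange installs per-container defaults; pods created
// afterwards without resource requirements get these values injected.
func createDefaultLimitRange(ctx context.Context, client kubernetes.Interface, ns string) error {
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "limit-range-defaults"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				DefaultRequest: corev1.ResourceList{ // applied to resources.requests
					corev1.ResourceCPU:              resource.MustParse("100m"),
					corev1.ResourceMemory:           resource.MustParse("200Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("200Gi"),
				},
				Default: corev1.ResourceList{ // applied to resources.limits
					corev1.ResourceCPU:              resource.MustParse("500m"),
					corev1.ResourceMemory:           resource.MustParse("500Mi"),
					corev1.ResourceEphemeralStorage: resource.MustParse("500Gi"),
				},
			}},
		},
	}
	_, err := client.CoreV1().LimitRanges(ns).Create(ctx, lr, metav1.CreateOptions{})
	return err
}
```

A pod created with only partial requirements gets the missing values merged in, which is what the "merged resource requirements" step above checks.
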
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:15:14.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0727 11:15:54.657962       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 27 11:15:54.658: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:15:54.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1256" for this suite.

• [SLOW TEST:40.660 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":160,"skipped":2980,"failed":0}
SSS
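
The garbage-collector test above deletes an RC with delete options that orphan its pods, then waits 30 seconds to confirm the pods survive. A short sketch of that delete call, assuming a clientset; the namespace and RC name are parameters supplied by the caller.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// deleteRCOrphaningPods deletes a ReplicationController with the Orphan
// propagation policy, so the garbage collector leaves the controller's pods
// running instead of cascading the delete.
func deleteRCOrphaningPods(ctx context.Context, client kubernetes.Interface, ns, rcName string) error {
	orphan := metav1.DeletePropagationOrphan
	return client.CoreV1().ReplicationControllers(ns).Delete(ctx, rcName, metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}
```
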
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:15:54.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:15:54.765: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jul 27 11:15:56.891: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:15:58.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9511" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":161,"skipped":2983,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
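
A rough sketch of the quota/condition interaction exercised above; the names are illustrative and the rc manifest is assumed to ask for three httpd replicas:

# Quota that allows only two pods in the namespace
kubectl create quota condition-test --hard=pods=2
# An rc requesting 3 replicas cannot fully satisfy them under the quota,
# so the controller sets a ReplicaFailure condition on its status
kubectl create -f condition-test-rc.yaml   # hypothetical manifest: rc "condition-test", replicas: 3
kubectl get rc condition-test -o jsonpath='{.status.conditions[*].type}'
# Scaling down to the allowed pod count clears the condition again
kubectl scale rc condition-test --replicas=2
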
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:15:58.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jul 27 11:15:58.932: INFO: Waiting up to 5m0s for pod "pod-475b865c-1088-40d4-b505-c306600c1652" in namespace "emptydir-1443" to be "Succeeded or Failed"
Jul 27 11:15:58.958: INFO: Pod "pod-475b865c-1088-40d4-b505-c306600c1652": Phase="Pending", Reason="", readiness=false. Elapsed: 26.11646ms
Jul 27 11:16:01.614: INFO: Pod "pod-475b865c-1088-40d4-b505-c306600c1652": Phase="Pending", Reason="", readiness=false. Elapsed: 2.681612144s
Jul 27 11:16:03.627: INFO: Pod "pod-475b865c-1088-40d4-b505-c306600c1652": Phase="Pending", Reason="", readiness=false. Elapsed: 4.694672779s
Jul 27 11:16:05.857: INFO: Pod "pod-475b865c-1088-40d4-b505-c306600c1652": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.924844393s
STEP: Saw pod success
Jul 27 11:16:05.857: INFO: Pod "pod-475b865c-1088-40d4-b505-c306600c1652" satisfied condition "Succeeded or Failed"
Jul 27 11:16:05.878: INFO: Trying to get logs from node kali-worker2 pod pod-475b865c-1088-40d4-b505-c306600c1652 container test-container: 
STEP: delete the pod
Jul 27 11:16:05.997: INFO: Waiting for pod pod-475b865c-1088-40d4-b505-c306600c1652 to disappear
Jul 27 11:16:06.046: INFO: Pod pod-475b865c-1088-40d4-b505-c306600c1652 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:16:06.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1443" for this suite.

• [SLOW TEST:8.277 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":162,"skipped":3029,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
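
For context, a minimal pod that exercises the same behaviour (tmpfs-backed emptyDir, file created as root with mode 0644) might look like the sketch below; names and image are illustrative, not the generated ones above:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs
spec:
  restartPolicy: Never
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory          # tmpfs-backed emptyDir
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c"]
    args:
    - touch /mnt/test/file && chmod 0644 /mnt/test/file && ls -l /mnt/test/file && mount | grep /mnt/test
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/test
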
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:16:06.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:16:18.261: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4911" for this suite.

• [SLOW TEST:11.609 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":163,"skipped":3051,"failed":0}
SS
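
A rough sketch of the quota accounting checked above (quota and rc names are illustrative; the quota controller may take a few seconds to refresh status.used):

kubectl create quota test-quota --hard=replicationcontrollers=1
kubectl create -f test-rc.yaml   # hypothetical manifest: any small rc, e.g. one httpd replica
kubectl get resourcequota test-quota -o jsonpath='{.status.used.replicationcontrollers}'   # "1"
kubectl delete rc test-rc
kubectl get resourcequota test-quota -o jsonpath='{.status.used.replicationcontrollers}'   # back to "0"
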
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:16:18.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-1890
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1890
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1890
Jul 27 11:16:18.391: INFO: Found 0 stateful pods, waiting for 1
Jul 27 11:16:28.395: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jul 27 11:16:28.399: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1890 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:16:28.658: INFO: stderr: "I0727 11:16:28.526885    1603 log.go:172] (0xc0009740b0) (0xc000914140) Create stream\nI0727 11:16:28.526959    1603 log.go:172] (0xc0009740b0) (0xc000914140) Stream added, broadcasting: 1\nI0727 11:16:28.530561    1603 log.go:172] (0xc0009740b0) Reply frame received for 1\nI0727 11:16:28.530632    1603 log.go:172] (0xc0009740b0) (0xc000914460) Create stream\nI0727 11:16:28.530664    1603 log.go:172] (0xc0009740b0) (0xc000914460) Stream added, broadcasting: 3\nI0727 11:16:28.531798    1603 log.go:172] (0xc0009740b0) Reply frame received for 3\nI0727 11:16:28.531842    1603 log.go:172] (0xc0009740b0) (0xc000914500) Create stream\nI0727 11:16:28.531854    1603 log.go:172] (0xc0009740b0) (0xc000914500) Stream added, broadcasting: 5\nI0727 11:16:28.532877    1603 log.go:172] (0xc0009740b0) Reply frame received for 5\nI0727 11:16:28.608868    1603 log.go:172] (0xc0009740b0) Data frame received for 5\nI0727 11:16:28.608898    1603 log.go:172] (0xc000914500) (5) Data frame handling\nI0727 11:16:28.608919    1603 log.go:172] (0xc000914500) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:16:28.649395    1603 log.go:172] (0xc0009740b0) Data frame received for 5\nI0727 11:16:28.649433    1603 log.go:172] (0xc000914500) (5) Data frame handling\nI0727 11:16:28.649465    1603 log.go:172] (0xc0009740b0) Data frame received for 3\nI0727 11:16:28.649485    1603 log.go:172] (0xc000914460) (3) Data frame handling\nI0727 11:16:28.649513    1603 log.go:172] (0xc000914460) (3) Data frame sent\nI0727 11:16:28.649537    1603 log.go:172] (0xc0009740b0) Data frame received for 3\nI0727 11:16:28.649555    1603 log.go:172] (0xc000914460) (3) Data frame handling\nI0727 11:16:28.650978    1603 log.go:172] (0xc0009740b0) Data frame received for 1\nI0727 11:16:28.650994    1603 log.go:172] (0xc000914140) (1) Data frame handling\nI0727 11:16:28.651011    1603 log.go:172] (0xc000914140) (1) Data frame sent\nI0727 11:16:28.651110    1603 log.go:172] (0xc0009740b0) (0xc000914140) Stream removed, broadcasting: 1\nI0727 11:16:28.651132    1603 log.go:172] (0xc0009740b0) Go away received\nI0727 11:16:28.651577    1603 log.go:172] (0xc0009740b0) (0xc000914140) Stream removed, broadcasting: 1\nI0727 11:16:28.651605    1603 log.go:172] (0xc0009740b0) (0xc000914460) Stream removed, broadcasting: 3\nI0727 11:16:28.651618    1603 log.go:172] (0xc0009740b0) (0xc000914500) Stream removed, broadcasting: 5\n"
Jul 27 11:16:28.658: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:16:28.658: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 27 11:16:28.662: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 27 11:16:38.667: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 27 11:16:38.667: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:16:38.699: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999535s
Jul 27 11:16:39.703: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.97856063s
Jul 27 11:16:40.708: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.97437628s
Jul 27 11:16:41.768: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.969750279s
Jul 27 11:16:42.773: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.909107637s
Jul 27 11:16:43.777: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.90423461s
Jul 27 11:16:44.782: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.90004355s
Jul 27 11:16:45.787: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.89528339s
Jul 27 11:16:46.793: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.890026759s
Jul 27 11:16:47.811: INFO: Verifying statefulset ss doesn't scale past 1 for another 884.256532ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1890
Jul 27 11:16:48.834: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1890 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:16:49.071: INFO: stderr: "I0727 11:16:48.971458    1623 log.go:172] (0xc0000e9970) (0xc0006115e0) Create stream\nI0727 11:16:48.971516    1623 log.go:172] (0xc0000e9970) (0xc0006115e0) Stream added, broadcasting: 1\nI0727 11:16:48.974352    1623 log.go:172] (0xc0000e9970) Reply frame received for 1\nI0727 11:16:48.974414    1623 log.go:172] (0xc0000e9970) (0xc000768000) Create stream\nI0727 11:16:48.974429    1623 log.go:172] (0xc0000e9970) (0xc000768000) Stream added, broadcasting: 3\nI0727 11:16:48.975637    1623 log.go:172] (0xc0000e9970) Reply frame received for 3\nI0727 11:16:48.975669    1623 log.go:172] (0xc0000e9970) (0xc00043e000) Create stream\nI0727 11:16:48.975688    1623 log.go:172] (0xc0000e9970) (0xc00043e000) Stream added, broadcasting: 5\nI0727 11:16:48.976595    1623 log.go:172] (0xc0000e9970) Reply frame received for 5\nI0727 11:16:49.063539    1623 log.go:172] (0xc0000e9970) Data frame received for 5\nI0727 11:16:49.063611    1623 log.go:172] (0xc00043e000) (5) Data frame handling\nI0727 11:16:49.063643    1623 log.go:172] (0xc00043e000) (5) Data frame sent\nI0727 11:16:49.063664    1623 log.go:172] (0xc0000e9970) Data frame received for 5\nI0727 11:16:49.063683    1623 log.go:172] (0xc00043e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0727 11:16:49.063730    1623 log.go:172] (0xc0000e9970) Data frame received for 3\nI0727 11:16:49.063791    1623 log.go:172] (0xc000768000) (3) Data frame handling\nI0727 11:16:49.063828    1623 log.go:172] (0xc000768000) (3) Data frame sent\nI0727 11:16:49.063854    1623 log.go:172] (0xc0000e9970) Data frame received for 3\nI0727 11:16:49.063870    1623 log.go:172] (0xc000768000) (3) Data frame handling\nI0727 11:16:49.065227    1623 log.go:172] (0xc0000e9970) Data frame received for 1\nI0727 11:16:49.065266    1623 log.go:172] (0xc0006115e0) (1) Data frame handling\nI0727 11:16:49.065324    1623 log.go:172] (0xc0006115e0) (1) Data frame sent\nI0727 11:16:49.065352    1623 log.go:172] (0xc0000e9970) (0xc0006115e0) Stream removed, broadcasting: 1\nI0727 11:16:49.065410    1623 log.go:172] (0xc0000e9970) Go away received\nI0727 11:16:49.065830    1623 log.go:172] (0xc0000e9970) (0xc0006115e0) Stream removed, broadcasting: 1\nI0727 11:16:49.065944    1623 log.go:172] (0xc0000e9970) (0xc000768000) Stream removed, broadcasting: 3\nI0727 11:16:49.065985    1623 log.go:172] (0xc0000e9970) (0xc00043e000) Stream removed, broadcasting: 5\n"
Jul 27 11:16:49.071: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 27 11:16:49.071: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 27 11:16:49.075: INFO: Found 1 stateful pods, waiting for 3
Jul 27 11:16:59.080: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:16:59.080: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:16:59.080: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jul 27 11:16:59.085: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1890 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:16:59.295: INFO: stderr: "I0727 11:16:59.219472    1647 log.go:172] (0xc000a78d10) (0xc000a665a0) Create stream\nI0727 11:16:59.219556    1647 log.go:172] (0xc000a78d10) (0xc000a665a0) Stream added, broadcasting: 1\nI0727 11:16:59.224039    1647 log.go:172] (0xc000a78d10) Reply frame received for 1\nI0727 11:16:59.224086    1647 log.go:172] (0xc000a78d10) (0xc000a720a0) Create stream\nI0727 11:16:59.224108    1647 log.go:172] (0xc000a78d10) (0xc000a720a0) Stream added, broadcasting: 3\nI0727 11:16:59.225197    1647 log.go:172] (0xc000a78d10) Reply frame received for 3\nI0727 11:16:59.225235    1647 log.go:172] (0xc000a78d10) (0xc0009e81e0) Create stream\nI0727 11:16:59.225256    1647 log.go:172] (0xc000a78d10) (0xc0009e81e0) Stream added, broadcasting: 5\nI0727 11:16:59.225959    1647 log.go:172] (0xc000a78d10) Reply frame received for 5\nI0727 11:16:59.287462    1647 log.go:172] (0xc000a78d10) Data frame received for 5\nI0727 11:16:59.287495    1647 log.go:172] (0xc0009e81e0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:16:59.287539    1647 log.go:172] (0xc000a78d10) Data frame received for 3\nI0727 11:16:59.287561    1647 log.go:172] (0xc000a720a0) (3) Data frame handling\nI0727 11:16:59.287578    1647 log.go:172] (0xc000a720a0) (3) Data frame sent\nI0727 11:16:59.287594    1647 log.go:172] (0xc000a78d10) Data frame received for 3\nI0727 11:16:59.287607    1647 log.go:172] (0xc000a720a0) (3) Data frame handling\nI0727 11:16:59.287620    1647 log.go:172] (0xc0009e81e0) (5) Data frame sent\nI0727 11:16:59.287631    1647 log.go:172] (0xc000a78d10) Data frame received for 5\nI0727 11:16:59.287640    1647 log.go:172] (0xc0009e81e0) (5) Data frame handling\nI0727 11:16:59.289091    1647 log.go:172] (0xc000a78d10) Data frame received for 1\nI0727 11:16:59.289126    1647 log.go:172] (0xc000a665a0) (1) Data frame handling\nI0727 11:16:59.289152    1647 log.go:172] (0xc000a665a0) (1) Data frame sent\nI0727 11:16:59.289181    1647 log.go:172] (0xc000a78d10) (0xc000a665a0) Stream removed, broadcasting: 1\nI0727 11:16:59.289209    1647 log.go:172] (0xc000a78d10) Go away received\nI0727 11:16:59.289672    1647 log.go:172] (0xc000a78d10) (0xc000a665a0) Stream removed, broadcasting: 1\nI0727 11:16:59.289703    1647 log.go:172] (0xc000a78d10) (0xc000a720a0) Stream removed, broadcasting: 3\nI0727 11:16:59.289721    1647 log.go:172] (0xc000a78d10) (0xc0009e81e0) Stream removed, broadcasting: 5\n"
Jul 27 11:16:59.295: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:16:59.295: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 27 11:16:59.295: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1890 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:16:59.589: INFO: stderr: "I0727 11:16:59.472193    1667 log.go:172] (0xc00003b1e0) (0xc000b740a0) Create stream\nI0727 11:16:59.472256    1667 log.go:172] (0xc00003b1e0) (0xc000b740a0) Stream added, broadcasting: 1\nI0727 11:16:59.475750    1667 log.go:172] (0xc00003b1e0) Reply frame received for 1\nI0727 11:16:59.475821    1667 log.go:172] (0xc00003b1e0) (0xc0009b0000) Create stream\nI0727 11:16:59.475849    1667 log.go:172] (0xc00003b1e0) (0xc0009b0000) Stream added, broadcasting: 3\nI0727 11:16:59.476865    1667 log.go:172] (0xc00003b1e0) Reply frame received for 3\nI0727 11:16:59.476930    1667 log.go:172] (0xc00003b1e0) (0xc000b74140) Create stream\nI0727 11:16:59.476953    1667 log.go:172] (0xc00003b1e0) (0xc000b74140) Stream added, broadcasting: 5\nI0727 11:16:59.478096    1667 log.go:172] (0xc00003b1e0) Reply frame received for 5\nI0727 11:16:59.546838    1667 log.go:172] (0xc00003b1e0) Data frame received for 5\nI0727 11:16:59.546864    1667 log.go:172] (0xc000b74140) (5) Data frame handling\nI0727 11:16:59.546883    1667 log.go:172] (0xc000b74140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:16:59.582424    1667 log.go:172] (0xc00003b1e0) Data frame received for 5\nI0727 11:16:59.582491    1667 log.go:172] (0xc000b74140) (5) Data frame handling\nI0727 11:16:59.582528    1667 log.go:172] (0xc00003b1e0) Data frame received for 3\nI0727 11:16:59.582549    1667 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0727 11:16:59.582578    1667 log.go:172] (0xc0009b0000) (3) Data frame sent\nI0727 11:16:59.582599    1667 log.go:172] (0xc00003b1e0) Data frame received for 3\nI0727 11:16:59.582620    1667 log.go:172] (0xc0009b0000) (3) Data frame handling\nI0727 11:16:59.584176    1667 log.go:172] (0xc00003b1e0) Data frame received for 1\nI0727 11:16:59.584205    1667 log.go:172] (0xc000b740a0) (1) Data frame handling\nI0727 11:16:59.584215    1667 log.go:172] (0xc000b740a0) (1) Data frame sent\nI0727 11:16:59.584228    1667 log.go:172] (0xc00003b1e0) (0xc000b740a0) Stream removed, broadcasting: 1\nI0727 11:16:59.584243    1667 log.go:172] (0xc00003b1e0) Go away received\nI0727 11:16:59.584692    1667 log.go:172] (0xc00003b1e0) (0xc000b740a0) Stream removed, broadcasting: 1\nI0727 11:16:59.584719    1667 log.go:172] (0xc00003b1e0) (0xc0009b0000) Stream removed, broadcasting: 3\nI0727 11:16:59.584848    1667 log.go:172] (0xc00003b1e0) (0xc000b74140) Stream removed, broadcasting: 5\n"
Jul 27 11:16:59.589: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:16:59.589: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 27 11:16:59.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1890 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:16:59.835: INFO: stderr: "I0727 11:16:59.726559    1690 log.go:172] (0xc000af0fd0) (0xc000a90460) Create stream\nI0727 11:16:59.726622    1690 log.go:172] (0xc000af0fd0) (0xc000a90460) Stream added, broadcasting: 1\nI0727 11:16:59.729611    1690 log.go:172] (0xc000af0fd0) Reply frame received for 1\nI0727 11:16:59.729665    1690 log.go:172] (0xc000af0fd0) (0xc000a48000) Create stream\nI0727 11:16:59.729689    1690 log.go:172] (0xc000af0fd0) (0xc000a48000) Stream added, broadcasting: 3\nI0727 11:16:59.730809    1690 log.go:172] (0xc000af0fd0) Reply frame received for 3\nI0727 11:16:59.730855    1690 log.go:172] (0xc000af0fd0) (0xc000ad6140) Create stream\nI0727 11:16:59.730870    1690 log.go:172] (0xc000af0fd0) (0xc000ad6140) Stream added, broadcasting: 5\nI0727 11:16:59.731881    1690 log.go:172] (0xc000af0fd0) Reply frame received for 5\nI0727 11:16:59.794914    1690 log.go:172] (0xc000af0fd0) Data frame received for 5\nI0727 11:16:59.794936    1690 log.go:172] (0xc000ad6140) (5) Data frame handling\nI0727 11:16:59.794948    1690 log.go:172] (0xc000ad6140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:16:59.828385    1690 log.go:172] (0xc000af0fd0) Data frame received for 3\nI0727 11:16:59.828420    1690 log.go:172] (0xc000a48000) (3) Data frame handling\nI0727 11:16:59.828436    1690 log.go:172] (0xc000a48000) (3) Data frame sent\nI0727 11:16:59.828446    1690 log.go:172] (0xc000af0fd0) Data frame received for 3\nI0727 11:16:59.828468    1690 log.go:172] (0xc000af0fd0) Data frame received for 5\nI0727 11:16:59.828492    1690 log.go:172] (0xc000ad6140) (5) Data frame handling\nI0727 11:16:59.828512    1690 log.go:172] (0xc000a48000) (3) Data frame handling\nI0727 11:16:59.830648    1690 log.go:172] (0xc000af0fd0) Data frame received for 1\nI0727 11:16:59.830677    1690 log.go:172] (0xc000a90460) (1) Data frame handling\nI0727 11:16:59.830689    1690 log.go:172] (0xc000a90460) (1) Data frame sent\nI0727 11:16:59.830704    1690 log.go:172] (0xc000af0fd0) (0xc000a90460) Stream removed, broadcasting: 1\nI0727 11:16:59.830729    1690 log.go:172] (0xc000af0fd0) Go away received\nI0727 11:16:59.831146    1690 log.go:172] (0xc000af0fd0) (0xc000a90460) Stream removed, broadcasting: 1\nI0727 11:16:59.831166    1690 log.go:172] (0xc000af0fd0) (0xc000a48000) Stream removed, broadcasting: 3\nI0727 11:16:59.831191    1690 log.go:172] (0xc000af0fd0) (0xc000ad6140) Stream removed, broadcasting: 5\n"
Jul 27 11:16:59.835: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:16:59.835: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 27 11:16:59.835: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:16:59.875: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jul 27 11:17:09.883: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 27 11:17:09.883: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 27 11:17:09.883: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 27 11:17:09.894: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999396s
Jul 27 11:17:10.899: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996083266s
Jul 27 11:17:11.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.99133577s
Jul 27 11:17:12.931: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.986791784s
Jul 27 11:17:13.936: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.959541137s
Jul 27 11:17:14.941: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.954589779s
Jul 27 11:17:15.946: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.949682907s
Jul 27 11:17:17.003: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.944716207s
Jul 27 11:17:18.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.887481016s
Jul 27 11:17:19.128: INFO: Verifying statefulset ss doesn't scale past 3 for another 882.648303ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1890
Jul 27 11:17:20.235: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1890 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:17:20.444: INFO: stderr: "I0727 11:17:20.366350    1710 log.go:172] (0xc00092c790) (0xc00098a0a0) Create stream\nI0727 11:17:20.366407    1710 log.go:172] (0xc00092c790) (0xc00098a0a0) Stream added, broadcasting: 1\nI0727 11:17:20.368543    1710 log.go:172] (0xc00092c790) Reply frame received for 1\nI0727 11:17:20.368594    1710 log.go:172] (0xc00092c790) (0xc0007f1400) Create stream\nI0727 11:17:20.368604    1710 log.go:172] (0xc00092c790) (0xc0007f1400) Stream added, broadcasting: 3\nI0727 11:17:20.369563    1710 log.go:172] (0xc00092c790) Reply frame received for 3\nI0727 11:17:20.369600    1710 log.go:172] (0xc00092c790) (0xc00041a000) Create stream\nI0727 11:17:20.369610    1710 log.go:172] (0xc00092c790) (0xc00041a000) Stream added, broadcasting: 5\nI0727 11:17:20.370359    1710 log.go:172] (0xc00092c790) Reply frame received for 5\nI0727 11:17:20.436106    1710 log.go:172] (0xc00092c790) Data frame received for 3\nI0727 11:17:20.436155    1710 log.go:172] (0xc0007f1400) (3) Data frame handling\nI0727 11:17:20.436188    1710 log.go:172] (0xc0007f1400) (3) Data frame sent\nI0727 11:17:20.436211    1710 log.go:172] (0xc00092c790) Data frame received for 3\nI0727 11:17:20.436222    1710 log.go:172] (0xc0007f1400) (3) Data frame handling\nI0727 11:17:20.436264    1710 log.go:172] (0xc00092c790) Data frame received for 5\nI0727 11:17:20.436307    1710 log.go:172] (0xc00041a000) (5) Data frame handling\nI0727 11:17:20.436342    1710 log.go:172] (0xc00041a000) (5) Data frame sent\nI0727 11:17:20.436365    1710 log.go:172] (0xc00092c790) Data frame received for 5\nI0727 11:17:20.436388    1710 log.go:172] (0xc00041a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0727 11:17:20.437817    1710 log.go:172] (0xc00092c790) Data frame received for 1\nI0727 11:17:20.437846    1710 log.go:172] (0xc00098a0a0) (1) Data frame handling\nI0727 11:17:20.437872    1710 log.go:172] (0xc00098a0a0) (1) Data frame sent\nI0727 11:17:20.437889    1710 log.go:172] (0xc00092c790) (0xc00098a0a0) Stream removed, broadcasting: 1\nI0727 11:17:20.437986    1710 log.go:172] (0xc00092c790) Go away received\nI0727 11:17:20.438383    1710 log.go:172] (0xc00092c790) (0xc00098a0a0) Stream removed, broadcasting: 1\nI0727 11:17:20.438403    1710 log.go:172] (0xc00092c790) (0xc0007f1400) Stream removed, broadcasting: 3\nI0727 11:17:20.438416    1710 log.go:172] (0xc00092c790) (0xc00041a000) Stream removed, broadcasting: 5\n"
Jul 27 11:17:20.445: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 27 11:17:20.445: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 27 11:17:20.445: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1890 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:17:20.657: INFO: stderr: "I0727 11:17:20.577754    1730 log.go:172] (0xc000a02840) (0xc00097e460) Create stream\nI0727 11:17:20.577832    1730 log.go:172] (0xc000a02840) (0xc00097e460) Stream added, broadcasting: 1\nI0727 11:17:20.580824    1730 log.go:172] (0xc000a02840) Reply frame received for 1\nI0727 11:17:20.580857    1730 log.go:172] (0xc000a02840) (0xc0005615e0) Create stream\nI0727 11:17:20.580866    1730 log.go:172] (0xc000a02840) (0xc0005615e0) Stream added, broadcasting: 3\nI0727 11:17:20.581566    1730 log.go:172] (0xc000a02840) Reply frame received for 3\nI0727 11:17:20.581596    1730 log.go:172] (0xc000a02840) (0xc0003e4a00) Create stream\nI0727 11:17:20.581605    1730 log.go:172] (0xc000a02840) (0xc0003e4a00) Stream added, broadcasting: 5\nI0727 11:17:20.582292    1730 log.go:172] (0xc000a02840) Reply frame received for 5\nI0727 11:17:20.650861    1730 log.go:172] (0xc000a02840) Data frame received for 3\nI0727 11:17:20.650905    1730 log.go:172] (0xc0005615e0) (3) Data frame handling\nI0727 11:17:20.650931    1730 log.go:172] (0xc0005615e0) (3) Data frame sent\nI0727 11:17:20.650944    1730 log.go:172] (0xc000a02840) Data frame received for 3\nI0727 11:17:20.650954    1730 log.go:172] (0xc0005615e0) (3) Data frame handling\nI0727 11:17:20.651075    1730 log.go:172] (0xc000a02840) Data frame received for 5\nI0727 11:17:20.651089    1730 log.go:172] (0xc0003e4a00) (5) Data frame handling\nI0727 11:17:20.651099    1730 log.go:172] (0xc0003e4a00) (5) Data frame sent\nI0727 11:17:20.651106    1730 log.go:172] (0xc000a02840) Data frame received for 5\nI0727 11:17:20.651113    1730 log.go:172] (0xc0003e4a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0727 11:17:20.652501    1730 log.go:172] (0xc000a02840) Data frame received for 1\nI0727 11:17:20.652521    1730 log.go:172] (0xc00097e460) (1) Data frame handling\nI0727 11:17:20.652544    1730 log.go:172] (0xc00097e460) (1) Data frame sent\nI0727 11:17:20.652569    1730 log.go:172] (0xc000a02840) (0xc00097e460) Stream removed, broadcasting: 1\nI0727 11:17:20.652634    1730 log.go:172] (0xc000a02840) Go away received\nI0727 11:17:20.652948    1730 log.go:172] (0xc000a02840) (0xc00097e460) Stream removed, broadcasting: 1\nI0727 11:17:20.652973    1730 log.go:172] (0xc000a02840) (0xc0005615e0) Stream removed, broadcasting: 3\nI0727 11:17:20.652980    1730 log.go:172] (0xc000a02840) (0xc0003e4a00) Stream removed, broadcasting: 5\n"
Jul 27 11:17:20.657: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 27 11:17:20.657: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 27 11:17:20.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-1890 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:17:20.841: INFO: stderr: "I0727 11:17:20.775810    1752 log.go:172] (0xc0005a6790) (0xc000412b40) Create stream\nI0727 11:17:20.775872    1752 log.go:172] (0xc0005a6790) (0xc000412b40) Stream added, broadcasting: 1\nI0727 11:17:20.779203    1752 log.go:172] (0xc0005a6790) Reply frame received for 1\nI0727 11:17:20.779262    1752 log.go:172] (0xc0005a6790) (0xc00068d540) Create stream\nI0727 11:17:20.779286    1752 log.go:172] (0xc0005a6790) (0xc00068d540) Stream added, broadcasting: 3\nI0727 11:17:20.780233    1752 log.go:172] (0xc0005a6790) Reply frame received for 3\nI0727 11:17:20.780278    1752 log.go:172] (0xc0005a6790) (0xc000ac8000) Create stream\nI0727 11:17:20.780300    1752 log.go:172] (0xc0005a6790) (0xc000ac8000) Stream added, broadcasting: 5\nI0727 11:17:20.781537    1752 log.go:172] (0xc0005a6790) Reply frame received for 5\nI0727 11:17:20.835894    1752 log.go:172] (0xc0005a6790) Data frame received for 5\nI0727 11:17:20.835927    1752 log.go:172] (0xc000ac8000) (5) Data frame handling\nI0727 11:17:20.835938    1752 log.go:172] (0xc000ac8000) (5) Data frame sent\nI0727 11:17:20.835945    1752 log.go:172] (0xc0005a6790) Data frame received for 5\nI0727 11:17:20.835952    1752 log.go:172] (0xc000ac8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0727 11:17:20.835973    1752 log.go:172] (0xc0005a6790) Data frame received for 3\nI0727 11:17:20.835985    1752 log.go:172] (0xc00068d540) (3) Data frame handling\nI0727 11:17:20.835998    1752 log.go:172] (0xc00068d540) (3) Data frame sent\nI0727 11:17:20.836008    1752 log.go:172] (0xc0005a6790) Data frame received for 3\nI0727 11:17:20.836016    1752 log.go:172] (0xc00068d540) (3) Data frame handling\nI0727 11:17:20.836927    1752 log.go:172] (0xc0005a6790) Data frame received for 1\nI0727 11:17:20.836952    1752 log.go:172] (0xc000412b40) (1) Data frame handling\nI0727 11:17:20.836963    1752 log.go:172] (0xc000412b40) (1) Data frame sent\nI0727 11:17:20.837075    1752 log.go:172] (0xc0005a6790) (0xc000412b40) Stream removed, broadcasting: 1\nI0727 11:17:20.837141    1752 log.go:172] (0xc0005a6790) Go away received\nI0727 11:17:20.837411    1752 log.go:172] (0xc0005a6790) (0xc000412b40) Stream removed, broadcasting: 1\nI0727 11:17:20.837429    1752 log.go:172] (0xc0005a6790) (0xc00068d540) Stream removed, broadcasting: 3\nI0727 11:17:20.837435    1752 log.go:172] (0xc0005a6790) (0xc000ac8000) Stream removed, broadcasting: 5\n"
Jul 27 11:17:20.842: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 27 11:17:20.842: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 27 11:17:20.842: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 27 11:18:00.878: INFO: Deleting all statefulset in ns statefulset-1890
Jul 27 11:18:00.882: INFO: Scaling statefulset ss to 0
Jul 27 11:18:00.892: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:18:00.894: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:18:00.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1890" for this suite.

• [SLOW TEST:102.647 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":164,"skipped":3053,"failed":0}
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:18:00.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 27 11:18:01.004: INFO: Waiting up to 5m0s for pod "downward-api-4b301434-b083-457d-9253-aadc098fba90" in namespace "downward-api-8282" to be "Succeeded or Failed"
Jul 27 11:18:01.016: INFO: Pod "downward-api-4b301434-b083-457d-9253-aadc098fba90": Phase="Pending", Reason="", readiness=false. Elapsed: 11.938112ms
Jul 27 11:18:03.019: INFO: Pod "downward-api-4b301434-b083-457d-9253-aadc098fba90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015502703s
Jul 27 11:18:05.023: INFO: Pod "downward-api-4b301434-b083-457d-9253-aadc098fba90": Phase="Running", Reason="", readiness=true. Elapsed: 4.019425799s
Jul 27 11:18:07.028: INFO: Pod "downward-api-4b301434-b083-457d-9253-aadc098fba90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.024150466s
STEP: Saw pod success
Jul 27 11:18:07.028: INFO: Pod "downward-api-4b301434-b083-457d-9253-aadc098fba90" satisfied condition "Succeeded or Failed"
Jul 27 11:18:07.098: INFO: Trying to get logs from node kali-worker pod downward-api-4b301434-b083-457d-9253-aadc098fba90 container dapi-container: 
STEP: delete the pod
Jul 27 11:18:07.140: INFO: Waiting for pod downward-api-4b301434-b083-457d-9253-aadc098fba90 to disappear
Jul 27 11:18:07.144: INFO: Pod downward-api-4b301434-b083-457d-9253-aadc098fba90 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:18:07.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8282" for this suite.

• [SLOW TEST:6.233 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":3053,"failed":0}
SS
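
The pod shape behind this check is roughly the following (illustrative names); the container simply prints the environment variable injected from the pod's status.hostIP:

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
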
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:18:07.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:18:11.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1220" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":166,"skipped":3055,"failed":0}
SS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:18:11.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:18:11.414: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9745e476-4609-4b6f-944b-6aa75994d7aa" in namespace "projected-7821" to be "Succeeded or Failed"
Jul 27 11:18:11.434: INFO: Pod "downwardapi-volume-9745e476-4609-4b6f-944b-6aa75994d7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 20.14396ms
Jul 27 11:18:13.437: INFO: Pod "downwardapi-volume-9745e476-4609-4b6f-944b-6aa75994d7aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023771224s
Jul 27 11:18:15.442: INFO: Pod "downwardapi-volume-9745e476-4609-4b6f-944b-6aa75994d7aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028472968s
STEP: Saw pod success
Jul 27 11:18:15.442: INFO: Pod "downwardapi-volume-9745e476-4609-4b6f-944b-6aa75994d7aa" satisfied condition "Succeeded or Failed"
Jul 27 11:18:15.445: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-9745e476-4609-4b6f-944b-6aa75994d7aa container client-container: 
STEP: delete the pod
Jul 27 11:18:15.461: INFO: Waiting for pod downwardapi-volume-9745e476-4609-4b6f-944b-6aa75994d7aa to disappear
Jul 27 11:18:15.465: INFO: Pod downwardapi-volume-9745e476-4609-4b6f-944b-6aa75994d7aa no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:18:15.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7821" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":167,"skipped":3057,"failed":0}
SSSSSSSSSSSS
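
A minimal sketch of a projected downwardAPI volume that exposes the container's CPU limit as a file (names and the divisor choice are illustrative assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: projected-cpu-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m       # the file then contains "500"
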
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:18:15.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1980.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1980.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 27 11:18:21.609: INFO: DNS probes using dns-test-4769aa45-2d8f-441f-bdb1-2839f4db8b44 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1980.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1980.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 27 11:18:27.733: INFO: File wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:27.736: INFO: File jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:27.736: INFO: Lookups using dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 failed for: [wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local]

Jul 27 11:18:32.741: INFO: File wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:32.745: INFO: File jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:32.745: INFO: Lookups using dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 failed for: [wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local]

Jul 27 11:18:37.741: INFO: File wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:37.746: INFO: File jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:37.746: INFO: Lookups using dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 failed for: [wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local]

Jul 27 11:18:42.741: INFO: File wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:42.745: INFO: File jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:42.745: INFO: Lookups using dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 failed for: [wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local]

Jul 27 11:18:47.742: INFO: File wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:47.746: INFO: File jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local from pod  dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jul 27 11:18:47.746: INFO: Lookups using dns-1980/dns-test-47911536-1084-4483-907d-8aad4c4689e4 failed for: [wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local]

Jul 27 11:18:52.746: INFO: DNS probes using dns-test-47911536-1084-4483-907d-8aad4c4689e4 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1980.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1980.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1980.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1980.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 27 11:19:01.431: INFO: DNS probes using dns-test-64ae307a-8b01-4df8-bbfb-609d65f27cf1 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:19:01.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1980" for this suite.

• [SLOW TEST:46.045 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":168,"skipped":3069,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
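
The service this test cycles through can be sketched as follows; the service name and target hostnames mirror the log, and the patch shown is one possible way to make the first transition (the suite drives the equivalent changes through the API):

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
spec:
  type: ExternalName
  externalName: foo.example.com

# Inside a pod in the same namespace the name resolves to a CNAME:
#   dig +short dns-test-service-3.<namespace>.svc.cluster.local CNAME   -> foo.example.com.
# Re-point the ExternalName (the test later converts the service to type=ClusterIP
# and checks that an A record is served instead of the CNAME):
kubectl patch svc dns-test-service-3 -p '{"spec":{"externalName":"bar.example.com"}}'
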
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:19:01.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:19:02.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2076" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":275,"completed":169,"skipped":3109,"failed":0}
SSSSSSSSS
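
What the check above amounts to, roughly: the default "kubernetes" Service fronts the API server over HTTPS on port 443. A hand-run equivalent (token-authenticated, assuming in-cluster access from any pod):

kubectl get service kubernetes -n default        # CLUSTER-IP ... 443/TCP
# From inside a pod:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/version
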
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:19:02.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Jul 27 11:19:08.196: INFO: Pod pod-hostip-63187832-5ddc-49fb-aacb-994930b28ac4 has hostIP: 172.18.0.13
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:19:08.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5278" for this suite.

• [SLOW TEST:6.127 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":170,"skipped":3118,"failed":0}
SSSSSSSSSSSS
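
The assertion above boils down to the pod's status.hostIP being populated with the node address (172.18.0.13 here). A quick way to read it for any running pod (placeholder names):

kubectl get pod <pod-name> -n <namespace> -o jsonpath='{.status.hostIP}'
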
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:19:08.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jul 27 11:19:08.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3589'
Jul 27 11:19:08.398: INFO: stderr: ""
Jul 27 11:19:08.398: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jul 27 11:19:13.448: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3589 -o json'
Jul 27 11:19:13.663: INFO: stderr: ""
Jul 27 11:19:13.663: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-07-27T11:19:08Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-07-27T11:19:08Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                            \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.2.196\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                       
     }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-07-27T11:19:11Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-3589\",\n        \"resourceVersion\": \"4559911\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3589/pods/e2e-test-httpd-pod\",\n        \"uid\": \"4d87ac57-2d5b-4010-82f7-95ac4e4552f6\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-45kg8\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-45kg8\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-45kg8\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-27T11:19:08Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-27T11:19:11Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-27T11:19:11Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-07-27T11:19:08Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": 
\"containerd://1f615fc5390cced7eab30c613adbe12786faa6798a4c95b92d8f0fd06f168a2c\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-07-27T11:19:10Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.13\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.196\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.2.196\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-07-27T11:19:08Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jul 27 11:19:13.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3589'
Jul 27 11:19:14.031: INFO: stderr: ""
Jul 27 11:19:14.031: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Jul 27 11:19:14.171: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3589'
Jul 27 11:19:17.568: INFO: stderr: ""
Jul 27 11:19:17.568: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:19:17.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3589" for this suite.

• [SLOW TEST:9.453 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":171,"skipped":3130,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:19:17.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jul 27 11:19:17.825: INFO: Created pod &Pod{ObjectMeta:{dns-5796  dns-5796 /api/v1/namespaces/dns-5796/pods/dns-5796 f660d89c-18f6-4023-88aa-f2b54475a487 4559956 0 2020-07-27 11:19:17 +0000 UTC   map[] map[] [] []  [{e2e.test Update v1 2020-07-27 11:19:17 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-tsw8c,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-tsw8c,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-tsw8c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]
LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:19:17.843: INFO: The status of Pod dns-5796 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:19:19.847: INFO: The status of Pod dns-5796 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:19:21.846: INFO: The status of Pod dns-5796 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Jul 27 11:19:21.846: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-5796 PodName:dns-5796 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 27 11:19:21.846: INFO: >>> kubeConfig: /root/.kube/config
I0727 11:19:21.878180       7 log.go:172] (0xc0025693f0) (0xc0012ef180) Create stream
I0727 11:19:21.878207       7 log.go:172] (0xc0025693f0) (0xc0012ef180) Stream added, broadcasting: 1
I0727 11:19:21.880229       7 log.go:172] (0xc0025693f0) Reply frame received for 1
I0727 11:19:21.880292       7 log.go:172] (0xc0025693f0) (0xc0015a8460) Create stream
I0727 11:19:21.880308       7 log.go:172] (0xc0025693f0) (0xc0015a8460) Stream added, broadcasting: 3
I0727 11:19:21.881419       7 log.go:172] (0xc0025693f0) Reply frame received for 3
I0727 11:19:21.881480       7 log.go:172] (0xc0025693f0) (0xc0012ef2c0) Create stream
I0727 11:19:21.881507       7 log.go:172] (0xc0025693f0) (0xc0012ef2c0) Stream added, broadcasting: 5
I0727 11:19:21.882684       7 log.go:172] (0xc0025693f0) Reply frame received for 5
I0727 11:19:21.966946       7 log.go:172] (0xc0025693f0) Data frame received for 3
I0727 11:19:21.966981       7 log.go:172] (0xc0015a8460) (3) Data frame handling
I0727 11:19:21.967011       7 log.go:172] (0xc0015a8460) (3) Data frame sent
I0727 11:19:21.967790       7 log.go:172] (0xc0025693f0) Data frame received for 5
I0727 11:19:21.967815       7 log.go:172] (0xc0012ef2c0) (5) Data frame handling
I0727 11:19:21.967866       7 log.go:172] (0xc0025693f0) Data frame received for 3
I0727 11:19:21.967885       7 log.go:172] (0xc0015a8460) (3) Data frame handling
I0727 11:19:21.969951       7 log.go:172] (0xc0025693f0) Data frame received for 1
I0727 11:19:21.969976       7 log.go:172] (0xc0012ef180) (1) Data frame handling
I0727 11:19:21.969989       7 log.go:172] (0xc0012ef180) (1) Data frame sent
I0727 11:19:21.970009       7 log.go:172] (0xc0025693f0) (0xc0012ef180) Stream removed, broadcasting: 1
I0727 11:19:21.970044       7 log.go:172] (0xc0025693f0) Go away received
I0727 11:19:21.970184       7 log.go:172] (0xc0025693f0) (0xc0012ef180) Stream removed, broadcasting: 1
I0727 11:19:21.970221       7 log.go:172] (0xc0025693f0) (0xc0015a8460) Stream removed, broadcasting: 3
I0727 11:19:21.970233       7 log.go:172] (0xc0025693f0) (0xc0012ef2c0) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jul 27 11:19:21.970: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-5796 PodName:dns-5796 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 27 11:19:21.970: INFO: >>> kubeConfig: /root/.kube/config
I0727 11:19:22.006360       7 log.go:172] (0xc005e7e630) (0xc001262fa0) Create stream
I0727 11:19:22.006389       7 log.go:172] (0xc005e7e630) (0xc001262fa0) Stream added, broadcasting: 1
I0727 11:19:22.009059       7 log.go:172] (0xc005e7e630) Reply frame received for 1
I0727 11:19:22.009122       7 log.go:172] (0xc005e7e630) (0xc000f7ca00) Create stream
I0727 11:19:22.009148       7 log.go:172] (0xc005e7e630) (0xc000f7ca00) Stream added, broadcasting: 3
I0727 11:19:22.010668       7 log.go:172] (0xc005e7e630) Reply frame received for 3
I0727 11:19:22.010710       7 log.go:172] (0xc005e7e630) (0xc0015a86e0) Create stream
I0727 11:19:22.010723       7 log.go:172] (0xc005e7e630) (0xc0015a86e0) Stream added, broadcasting: 5
I0727 11:19:22.012192       7 log.go:172] (0xc005e7e630) Reply frame received for 5
I0727 11:19:22.074444       7 log.go:172] (0xc005e7e630) Data frame received for 3
I0727 11:19:22.074472       7 log.go:172] (0xc000f7ca00) (3) Data frame handling
I0727 11:19:22.074494       7 log.go:172] (0xc000f7ca00) (3) Data frame sent
I0727 11:19:22.075577       7 log.go:172] (0xc005e7e630) Data frame received for 5
I0727 11:19:22.075609       7 log.go:172] (0xc0015a86e0) (5) Data frame handling
I0727 11:19:22.075649       7 log.go:172] (0xc005e7e630) Data frame received for 3
I0727 11:19:22.075680       7 log.go:172] (0xc000f7ca00) (3) Data frame handling
I0727 11:19:22.076946       7 log.go:172] (0xc005e7e630) Data frame received for 1
I0727 11:19:22.076983       7 log.go:172] (0xc001262fa0) (1) Data frame handling
I0727 11:19:22.077011       7 log.go:172] (0xc001262fa0) (1) Data frame sent
I0727 11:19:22.077057       7 log.go:172] (0xc005e7e630) (0xc001262fa0) Stream removed, broadcasting: 1
I0727 11:19:22.077134       7 log.go:172] (0xc005e7e630) Go away received
I0727 11:19:22.077173       7 log.go:172] (0xc005e7e630) (0xc001262fa0) Stream removed, broadcasting: 1
I0727 11:19:22.077195       7 log.go:172] (0xc005e7e630) (0xc000f7ca00) Stream removed, broadcasting: 3
I0727 11:19:22.077218       7 log.go:172] (0xc005e7e630) (0xc0015a86e0) Stream removed, broadcasting: 5
Jul 27 11:19:22.077: INFO: Deleting pod dns-5796...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:19:22.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5796" for this suite.
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":172,"skipped":3143,"failed":0}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:19:22.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jul 27 11:19:23.415: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7905 /api/v1/namespaces/watch-7905/configmaps/e2e-watch-test-watch-closed 0e651b72-70e1-4219-bbcb-f8cc88b49f52 4559991 0 2020-07-27 11:19:23 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-27 11:19:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:19:23.415: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7905 /api/v1/namespaces/watch-7905/configmaps/e2e-watch-test-watch-closed 0e651b72-70e1-4219-bbcb-f8cc88b49f52 4559992 0 2020-07-27 11:19:23 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-27 11:19:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jul 27 11:19:23.465: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7905 /api/v1/namespaces/watch-7905/configmaps/e2e-watch-test-watch-closed 0e651b72-70e1-4219-bbcb-f8cc88b49f52 4559994 0 2020-07-27 11:19:23 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-27 11:19:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Jul 27 11:19:23.466: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7905 /api/v1/namespaces/watch-7905/configmaps/e2e-watch-test-watch-closed 0e651b72-70e1-4219-bbcb-f8cc88b49f52 4559997 0 2020-07-27 11:19:23 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  [{e2e.test Update v1 2020-07-27 11:19:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:19:23.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7905" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":173,"skipped":3146,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:19:23.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:19:23.646: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:19:24.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8433" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":174,"skipped":3206,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:19:24.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:19:25.427: INFO: new replicaset for deployment "sample-webhook-deployment" is yet to be created
Jul 27 11:19:27.463: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445565, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445565, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445565, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445565, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:19:30.514: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:19:30.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9442" for this suite.
STEP: Destroying namespace "webhook-9442-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.107 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":175,"skipped":3208,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:19:30.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:20:02.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3291" for this suite.
STEP: Destroying namespace "nsdeletetest-3949" for this suite.
Jul 27 11:20:02.019: INFO: Namespace nsdeletetest-3949 was already deleted
STEP: Destroying namespace "nsdeletetest-7263" for this suite.

• [SLOW TEST:31.223 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":176,"skipped":3215,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:20:02.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3638
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jul 27 11:20:02.210: INFO: Found 0 stateful pods, waiting for 3
Jul 27 11:20:12.214: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:20:12.214: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:20:12.215: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jul 27 11:20:22.214: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:20:22.214: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:20:22.214: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:20:22.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3638 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:20:22.450: INFO: stderr: "I0727 11:20:22.344307    1856 log.go:172] (0xc000538f20) (0xc0007f9860) Create stream\nI0727 11:20:22.344370    1856 log.go:172] (0xc000538f20) (0xc0007f9860) Stream added, broadcasting: 1\nI0727 11:20:22.347480    1856 log.go:172] (0xc000538f20) Reply frame received for 1\nI0727 11:20:22.347547    1856 log.go:172] (0xc000538f20) (0xc000968000) Create stream\nI0727 11:20:22.347600    1856 log.go:172] (0xc000538f20) (0xc000968000) Stream added, broadcasting: 3\nI0727 11:20:22.348632    1856 log.go:172] (0xc000538f20) Reply frame received for 3\nI0727 11:20:22.348679    1856 log.go:172] (0xc000538f20) (0xc0009680a0) Create stream\nI0727 11:20:22.348694    1856 log.go:172] (0xc000538f20) (0xc0009680a0) Stream added, broadcasting: 5\nI0727 11:20:22.349575    1856 log.go:172] (0xc000538f20) Reply frame received for 5\nI0727 11:20:22.412690    1856 log.go:172] (0xc000538f20) Data frame received for 5\nI0727 11:20:22.412860    1856 log.go:172] (0xc0009680a0) (5) Data frame handling\nI0727 11:20:22.412907    1856 log.go:172] (0xc0009680a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:20:22.444377    1856 log.go:172] (0xc000538f20) Data frame received for 3\nI0727 11:20:22.444410    1856 log.go:172] (0xc000968000) (3) Data frame handling\nI0727 11:20:22.444432    1856 log.go:172] (0xc000968000) (3) Data frame sent\nI0727 11:20:22.444554    1856 log.go:172] (0xc000538f20) Data frame received for 3\nI0727 11:20:22.444579    1856 log.go:172] (0xc000968000) (3) Data frame handling\nI0727 11:20:22.445015    1856 log.go:172] (0xc000538f20) Data frame received for 5\nI0727 11:20:22.445047    1856 log.go:172] (0xc0009680a0) (5) Data frame handling\nI0727 11:20:22.446950    1856 log.go:172] (0xc000538f20) Data frame received for 1\nI0727 11:20:22.446986    1856 log.go:172] (0xc0007f9860) (1) Data frame handling\nI0727 11:20:22.447028    1856 log.go:172] (0xc0007f9860) (1) Data frame sent\nI0727 11:20:22.447071    1856 log.go:172] (0xc000538f20) (0xc0007f9860) Stream removed, broadcasting: 1\nI0727 11:20:22.447109    1856 log.go:172] (0xc000538f20) Go away received\nI0727 11:20:22.447329    1856 log.go:172] (0xc000538f20) (0xc0007f9860) Stream removed, broadcasting: 1\nI0727 11:20:22.447352    1856 log.go:172] (0xc000538f20) (0xc000968000) Stream removed, broadcasting: 3\nI0727 11:20:22.447366    1856 log.go:172] (0xc000538f20) (0xc0009680a0) Stream removed, broadcasting: 5\n"
Jul 27 11:20:22.451: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:20:22.451: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul 27 11:20:32.486: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jul 27 11:20:42.521: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3638 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:20:42.740: INFO: stderr: "I0727 11:20:42.667038    1872 log.go:172] (0xc0000ea630) (0xc00044ca00) Create stream\nI0727 11:20:42.667113    1872 log.go:172] (0xc0000ea630) (0xc00044ca00) Stream added, broadcasting: 1\nI0727 11:20:42.673195    1872 log.go:172] (0xc0000ea630) Reply frame received for 1\nI0727 11:20:42.673266    1872 log.go:172] (0xc0000ea630) (0xc0008fa000) Create stream\nI0727 11:20:42.673325    1872 log.go:172] (0xc0000ea630) (0xc0008fa000) Stream added, broadcasting: 3\nI0727 11:20:42.674472    1872 log.go:172] (0xc0000ea630) Reply frame received for 3\nI0727 11:20:42.674522    1872 log.go:172] (0xc0000ea630) (0xc0009e8000) Create stream\nI0727 11:20:42.674547    1872 log.go:172] (0xc0000ea630) (0xc0009e8000) Stream added, broadcasting: 5\nI0727 11:20:42.675547    1872 log.go:172] (0xc0000ea630) Reply frame received for 5\nI0727 11:20:42.731591    1872 log.go:172] (0xc0000ea630) Data frame received for 5\nI0727 11:20:42.731752    1872 log.go:172] (0xc0009e8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0727 11:20:42.731818    1872 log.go:172] (0xc0000ea630) Data frame received for 3\nI0727 11:20:42.731845    1872 log.go:172] (0xc0008fa000) (3) Data frame handling\nI0727 11:20:42.731863    1872 log.go:172] (0xc0008fa000) (3) Data frame sent\nI0727 11:20:42.731875    1872 log.go:172] (0xc0000ea630) Data frame received for 3\nI0727 11:20:42.731889    1872 log.go:172] (0xc0008fa000) (3) Data frame handling\nI0727 11:20:42.731952    1872 log.go:172] (0xc0009e8000) (5) Data frame sent\nI0727 11:20:42.731984    1872 log.go:172] (0xc0000ea630) Data frame received for 5\nI0727 11:20:42.732013    1872 log.go:172] (0xc0009e8000) (5) Data frame handling\nI0727 11:20:42.733683    1872 log.go:172] (0xc0000ea630) Data frame received for 1\nI0727 11:20:42.733708    1872 log.go:172] (0xc00044ca00) (1) Data frame handling\nI0727 11:20:42.733730    1872 log.go:172] (0xc00044ca00) (1) Data frame sent\nI0727 11:20:42.733751    1872 log.go:172] (0xc0000ea630) (0xc00044ca00) Stream removed, broadcasting: 1\nI0727 11:20:42.733780    1872 log.go:172] (0xc0000ea630) Go away received\nI0727 11:20:42.734235    1872 log.go:172] (0xc0000ea630) (0xc00044ca00) Stream removed, broadcasting: 1\nI0727 11:20:42.734259    1872 log.go:172] (0xc0000ea630) (0xc0008fa000) Stream removed, broadcasting: 3\nI0727 11:20:42.734270    1872 log.go:172] (0xc0000ea630) (0xc0009e8000) Stream removed, broadcasting: 5\n"
Jul 27 11:20:42.741: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 27 11:20:42.741: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

STEP: Rolling back to a previous revision
Jul 27 11:21:02.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3638 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:21:03.000: INFO: stderr: "I0727 11:21:02.889255    1895 log.go:172] (0xc0009cc160) (0xc000821540) Create stream\nI0727 11:21:02.889319    1895 log.go:172] (0xc0009cc160) (0xc000821540) Stream added, broadcasting: 1\nI0727 11:21:02.891787    1895 log.go:172] (0xc0009cc160) Reply frame received for 1\nI0727 11:21:02.891822    1895 log.go:172] (0xc0009cc160) (0xc0007db540) Create stream\nI0727 11:21:02.891831    1895 log.go:172] (0xc0009cc160) (0xc0007db540) Stream added, broadcasting: 3\nI0727 11:21:02.892892    1895 log.go:172] (0xc0009cc160) Reply frame received for 3\nI0727 11:21:02.892930    1895 log.go:172] (0xc0009cc160) (0xc0008215e0) Create stream\nI0727 11:21:02.892944    1895 log.go:172] (0xc0009cc160) (0xc0008215e0) Stream added, broadcasting: 5\nI0727 11:21:02.893799    1895 log.go:172] (0xc0009cc160) Reply frame received for 5\nI0727 11:21:02.960078    1895 log.go:172] (0xc0009cc160) Data frame received for 5\nI0727 11:21:02.960106    1895 log.go:172] (0xc0008215e0) (5) Data frame handling\nI0727 11:21:02.960123    1895 log.go:172] (0xc0008215e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:21:02.991748    1895 log.go:172] (0xc0009cc160) Data frame received for 3\nI0727 11:21:02.991796    1895 log.go:172] (0xc0007db540) (3) Data frame handling\nI0727 11:21:02.991834    1895 log.go:172] (0xc0007db540) (3) Data frame sent\nI0727 11:21:02.991871    1895 log.go:172] (0xc0009cc160) Data frame received for 5\nI0727 11:21:02.991893    1895 log.go:172] (0xc0008215e0) (5) Data frame handling\nI0727 11:21:02.992223    1895 log.go:172] (0xc0009cc160) Data frame received for 3\nI0727 11:21:02.992243    1895 log.go:172] (0xc0007db540) (3) Data frame handling\nI0727 11:21:02.994055    1895 log.go:172] (0xc0009cc160) Data frame received for 1\nI0727 11:21:02.994072    1895 log.go:172] (0xc000821540) (1) Data frame handling\nI0727 11:21:02.994094    1895 log.go:172] (0xc000821540) (1) Data frame sent\nI0727 11:21:02.994118    1895 log.go:172] (0xc0009cc160) (0xc000821540) Stream removed, broadcasting: 1\nI0727 11:21:02.994133    1895 log.go:172] (0xc0009cc160) Go away received\nI0727 11:21:02.994617    1895 log.go:172] (0xc0009cc160) (0xc000821540) Stream removed, broadcasting: 1\nI0727 11:21:02.994640    1895 log.go:172] (0xc0009cc160) (0xc0007db540) Stream removed, broadcasting: 3\nI0727 11:21:02.994653    1895 log.go:172] (0xc0009cc160) (0xc0008215e0) Stream removed, broadcasting: 5\n"
Jul 27 11:21:03.000: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:21:03.000: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 27 11:21:13.034: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jul 27 11:21:23.435: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3638 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:21:23.644: INFO: stderr: "I0727 11:21:23.559367    1916 log.go:172] (0xc00003a210) (0xc00068b400) Create stream\nI0727 11:21:23.559425    1916 log.go:172] (0xc00003a210) (0xc00068b400) Stream added, broadcasting: 1\nI0727 11:21:23.561942    1916 log.go:172] (0xc00003a210) Reply frame received for 1\nI0727 11:21:23.562033    1916 log.go:172] (0xc00003a210) (0xc000956000) Create stream\nI0727 11:21:23.562064    1916 log.go:172] (0xc00003a210) (0xc000956000) Stream added, broadcasting: 3\nI0727 11:21:23.563309    1916 log.go:172] (0xc00003a210) Reply frame received for 3\nI0727 11:21:23.563346    1916 log.go:172] (0xc00003a210) (0xc00068b4a0) Create stream\nI0727 11:21:23.563359    1916 log.go:172] (0xc00003a210) (0xc00068b4a0) Stream added, broadcasting: 5\nI0727 11:21:23.564651    1916 log.go:172] (0xc00003a210) Reply frame received for 5\nI0727 11:21:23.634882    1916 log.go:172] (0xc00003a210) Data frame received for 5\nI0727 11:21:23.634915    1916 log.go:172] (0xc00068b4a0) (5) Data frame handling\nI0727 11:21:23.634926    1916 log.go:172] (0xc00068b4a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0727 11:21:23.634959    1916 log.go:172] (0xc00003a210) Data frame received for 3\nI0727 11:21:23.634996    1916 log.go:172] (0xc000956000) (3) Data frame handling\nI0727 11:21:23.635029    1916 log.go:172] (0xc000956000) (3) Data frame sent\nI0727 11:21:23.635053    1916 log.go:172] (0xc00003a210) Data frame received for 5\nI0727 11:21:23.635086    1916 log.go:172] (0xc00068b4a0) (5) Data frame handling\nI0727 11:21:23.635124    1916 log.go:172] (0xc00003a210) Data frame received for 3\nI0727 11:21:23.635173    1916 log.go:172] (0xc000956000) (3) Data frame handling\nI0727 11:21:23.637149    1916 log.go:172] (0xc00003a210) Data frame received for 1\nI0727 11:21:23.637164    1916 log.go:172] (0xc00068b400) (1) Data frame handling\nI0727 11:21:23.637179    1916 log.go:172] (0xc00068b400) (1) Data frame sent\nI0727 11:21:23.637189    1916 log.go:172] (0xc00003a210) (0xc00068b400) Stream removed, broadcasting: 1\nI0727 11:21:23.637347    1916 log.go:172] (0xc00003a210) Go away received\nI0727 11:21:23.637542    1916 log.go:172] (0xc00003a210) (0xc00068b400) Stream removed, broadcasting: 1\nI0727 11:21:23.637565    1916 log.go:172] (0xc00003a210) (0xc000956000) Stream removed, broadcasting: 3\nI0727 11:21:23.637581    1916 log.go:172] (0xc00003a210) (0xc00068b4a0) Stream removed, broadcasting: 5\n"
Jul 27 11:21:23.644: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 27 11:21:23.644: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 27 11:21:33.666: INFO: Waiting for StatefulSet statefulset-3638/ss2 to complete update
Jul 27 11:21:33.666: INFO: Waiting for Pod statefulset-3638/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul 27 11:21:33.666: INFO: Waiting for Pod statefulset-3638/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jul 27 11:21:43.674: INFO: Waiting for StatefulSet statefulset-3638/ss2 to complete update
Jul 27 11:21:43.674: INFO: Waiting for Pod statefulset-3638/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 27 11:21:53.675: INFO: Deleting all statefulset in ns statefulset-3638
Jul 27 11:21:53.678: INFO: Scaling statefulset ss2 to 0
Jul 27 11:22:13.739: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:22:13.741: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:22:13.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3638" for this suite.

• [SLOW TEST:131.756 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":177,"skipped":3217,"failed":0}
SSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:22:13.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:22:13.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8691" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":178,"skipped":3220,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:22:13.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-45e8dea1-2c10-46d1-9902-10234bc97a9a
STEP: Creating a pod to test consume configMaps
Jul 27 11:22:14.115: INFO: Waiting up to 5m0s for pod "pod-configmaps-d126798e-8716-4fbd-90d5-1e9da495a9a5" in namespace "configmap-7977" to be "Succeeded or Failed"
Jul 27 11:22:14.118: INFO: Pod "pod-configmaps-d126798e-8716-4fbd-90d5-1e9da495a9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.328289ms
Jul 27 11:22:16.122: INFO: Pod "pod-configmaps-d126798e-8716-4fbd-90d5-1e9da495a9a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007252126s
Jul 27 11:22:18.127: INFO: Pod "pod-configmaps-d126798e-8716-4fbd-90d5-1e9da495a9a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011500623s
STEP: Saw pod success
Jul 27 11:22:18.127: INFO: Pod "pod-configmaps-d126798e-8716-4fbd-90d5-1e9da495a9a5" satisfied condition "Succeeded or Failed"
Jul 27 11:22:18.129: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-d126798e-8716-4fbd-90d5-1e9da495a9a5 container configmap-volume-test: 
STEP: delete the pod
Jul 27 11:22:18.239: INFO: Waiting for pod pod-configmaps-d126798e-8716-4fbd-90d5-1e9da495a9a5 to disappear
Jul 27 11:22:18.251: INFO: Pod pod-configmaps-d126798e-8716-4fbd-90d5-1e9da495a9a5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:22:18.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7977" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":179,"skipped":3225,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:22:18.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:22:18.313: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jul 27 11:22:18.328: INFO: Pod name sample-pod: Found 0 pods out of 1
Jul 27 11:22:23.340: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jul 27 11:22:23.340: INFO: Creating deployment "test-rolling-update-deployment"
Jul 27 11:22:23.351: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jul 27 11:22:23.423: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jul 27 11:22:25.535: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jul 27 11:22:25.538: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445743, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445743, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445743, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731445743, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 27 11:22:27.542: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 27 11:22:27.550: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-9521 /apis/apps/v1/namespaces/deployment-9521/deployments/test-rolling-update-deployment 17a8d3f8-0a16-4b46-9056-f7b2d010216b 4561182 1 2020-07-27 11:22:23 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-07-27 11:22:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-27 11:22:27 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 
125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00082e158  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-07-27 11:22:23 +0000 UTC,LastTransitionTime:2020-07-27 11:22:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-07-27 11:22:27 +0000 UTC,LastTransitionTime:2020-07-27 11:22:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jul 27 11:22:27.554: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-9521 /apis/apps/v1/namespaces/deployment-9521/replicasets/test-rolling-update-deployment-59d5cb45c7 ab2b89f9-857b-4b9e-b540-d6401cb923ec 4561170 1 2020-07-27 11:22:23 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 17a8d3f8-0a16-4b46-9056-f7b2d010216b}] [] [managedFields: kube-controller-manager (Update apps/v1, 2020-07-27 11:22:26); raw FieldsV1 byte dump omitted]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{labels map[name:sample-pod pod-template-hash:59d5cb45c7]; container agnhost, image us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, ImagePullPolicy IfNotPresent, TerminationMessagePath /dev/termination-log, RestartPolicy Always, DNSPolicy ClusterFirst, SchedulerName default-scheduler},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jul 27 11:22:27.554: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jul 27 11:22:27.554: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-9521 /apis/apps/v1/namespaces/deployment-9521/replicasets/test-rolling-update-controller 80f94d33-a0d0-4c82-a598-3c3c61fccd5a 4561181 2 2020-07-27 11:22:18 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 17a8d3f8-0a16-4b46-9056-f7b2d010216b}] [] [managedFields: e2e.test (Update apps/v1, 2020-07-27 11:22:18) and kube-controller-manager (Update apps/v1, 2020-07-27 11:22:27); raw FieldsV1 byte dumps omitted]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{labels map[name:sample-pod pod:httpd]; container httpd, image docker.io/library/httpd:2.4.38-alpine, ImagePullPolicy IfNotPresent, TerminationMessagePath /dev/termination-log, RestartPolicy Always, DNSPolicy ClusterFirst, SchedulerName default-scheduler},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 27 11:22:27.557: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-2qn9t" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-2qn9t test-rolling-update-deployment-59d5cb45c7- deployment-9521 /api/v1/namespaces/deployment-9521/pods/test-rolling-update-deployment-59d5cb45c7-2qn9t 9eed383d-72df-40e6-af6c-bc86272deea6 4561169 0 2020-07-27 11:22:23 +0000 UTC map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 ab2b89f9-857b-4b9e-b540-d6401cb923ec}] [] [managedFields: kube-controller-manager (Update v1, 2020-07-27 11:22:23) and kubelet (Update v1, 2020-07-27 11:22:26); raw FieldsV1 byte dumps omitted]},Spec:PodSpec{Volumes:[{default-token-qnndq: Secret default-token-qnndq, DefaultMode *420}],Containers:[{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,VolumeMounts:[{default-token-qnndq -> /var/run/secrets/kubernetes.io/serviceaccount, ReadOnly}],TerminationMessagePath:/dev/termination-log,TerminationMessagePolicy:File,ImagePullPolicy:IfNotPresent}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,DNSPolicy:ClusterFirst,ServiceAccountName:default,NodeName:kali-worker,SecurityContext:&PodSecurityContext{},SchedulerName:default-scheduler,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute *300},{node.kubernetes.io/unreachable Exists NoExecute *300}],Priority:*0,EnableServiceLinks:*true,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 2020-07-27 11:22:23 +0000 UTC},{Ready True 2020-07-27 11:22:26 +0000 UTC},{ContainersReady True 2020-07-27 11:22:26 +0000 UTC},{PodScheduled True 2020-07-27 11:22:23 +0000 UTC}],HostIP:172.18.0.13,PodIP:10.244.2.204,PodIPs:[10.244.2.204],StartTime:2020-07-27 11:22:23 +0000 UTC,ContainerStatuses:[{Name:agnhost,State:Running (StartedAt 2020-07-27 11:22:26 +0000 UTC),Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://d57efe2cac0638905f2b75fdc4a20753f18b5fff7815d9d95076d8b588bfae28,Started:*true}],QOSClass:BestEffort,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:22:27.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-9521" for this suite.

• [SLOW TEST:9.305 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":180,"skipped":3233,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:22:27.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-5c7cf942-4450-4488-9ca1-e4663cd34f1a
STEP: Creating a pod to test consume configMaps
Jul 27 11:22:27.877: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f3cf5759-bc29-4901-8f7f-89f2c11bffca" in namespace "projected-3599" to be "Succeeded or Failed"
Jul 27 11:22:27.898: INFO: Pod "pod-projected-configmaps-f3cf5759-bc29-4901-8f7f-89f2c11bffca": Phase="Pending", Reason="", readiness=false. Elapsed: 21.719955ms
Jul 27 11:22:29.934: INFO: Pod "pod-projected-configmaps-f3cf5759-bc29-4901-8f7f-89f2c11bffca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057081244s
Jul 27 11:22:31.937: INFO: Pod "pod-projected-configmaps-f3cf5759-bc29-4901-8f7f-89f2c11bffca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.060908099s
STEP: Saw pod success
Jul 27 11:22:31.938: INFO: Pod "pod-projected-configmaps-f3cf5759-bc29-4901-8f7f-89f2c11bffca" satisfied condition "Succeeded or Failed"
Jul 27 11:22:31.940: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-f3cf5759-bc29-4901-8f7f-89f2c11bffca container projected-configmap-volume-test: <nil>
STEP: delete the pod
Jul 27 11:22:32.000: INFO: Waiting for pod pod-projected-configmaps-f3cf5759-bc29-4901-8f7f-89f2c11bffca to disappear
Jul 27 11:22:32.016: INFO: Pod pod-projected-configmaps-f3cf5759-bc29-4901-8f7f-89f2c11bffca no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:22:32.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3599" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":3241,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:22:32.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:22:32.088: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jul 27 11:22:35.225: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7075 create -f -'
Jul 27 11:22:38.323: INFO: stderr: ""
Jul 27 11:22:38.323: INFO: stdout: "e2e-test-crd-publish-openapi-4685-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul 27 11:22:38.323: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7075 delete e2e-test-crd-publish-openapi-4685-crds test-foo'
Jul 27 11:22:38.422: INFO: stderr: ""
Jul 27 11:22:38.422: INFO: stdout: "e2e-test-crd-publish-openapi-4685-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jul 27 11:22:38.422: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7075 apply -f -'
Jul 27 11:22:38.680: INFO: stderr: ""
Jul 27 11:22:38.680: INFO: stdout: "e2e-test-crd-publish-openapi-4685-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jul 27 11:22:38.680: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7075 delete e2e-test-crd-publish-openapi-4685-crds test-foo'
Jul 27 11:22:38.818: INFO: stderr: ""
Jul 27 11:22:38.818: INFO: stdout: "e2e-test-crd-publish-openapi-4685-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jul 27 11:22:38.818: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7075 create -f -'
Jul 27 11:22:39.046: INFO: rc: 1
Jul 27 11:22:39.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7075 apply -f -'
Jul 27 11:22:39.288: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jul 27 11:22:39.292: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7075 create -f -'
Jul 27 11:22:39.537: INFO: rc: 1
Jul 27 11:22:39.537: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7075 apply -f -'
Jul 27 11:22:39.758: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jul 27 11:22:39.758: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4685-crds'
Jul 27 11:22:40.004: INFO: stderr: ""
Jul 27 11:22:40.004: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4685-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jul 27 11:22:40.005: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4685-crds.metadata'
Jul 27 11:22:40.275: INFO: stderr: ""
Jul 27 11:22:40.275: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4685-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jul 27 11:22:40.276: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4685-crds.spec'
Jul 27 11:22:40.498: INFO: stderr: ""
Jul 27 11:22:40.499: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4685-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jul 27 11:22:40.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4685-crds.spec.bars'
Jul 27 11:22:40.775: INFO: stderr: ""
Jul 27 11:22:40.775: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4685-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jul 27 11:22:40.775: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4685-crds.spec.bars2'
Jul 27 11:22:41.019: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:22:42.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7075" for this suite.

• [SLOW TEST:10.940 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":182,"skipped":3276,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:22:42.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jul 27 11:22:47.588: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b5f9d556-364a-4f2b-898a-d6a0d9b328aa"
Jul 27 11:22:47.588: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b5f9d556-364a-4f2b-898a-d6a0d9b328aa" in namespace "pods-8478" to be "terminated due to deadline exceeded"
Jul 27 11:22:47.651: INFO: Pod "pod-update-activedeadlineseconds-b5f9d556-364a-4f2b-898a-d6a0d9b328aa": Phase="Running", Reason="", readiness=true. Elapsed: 62.262614ms
Jul 27 11:22:49.654: INFO: Pod "pod-update-activedeadlineseconds-b5f9d556-364a-4f2b-898a-d6a0d9b328aa": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.065844439s
Jul 27 11:22:49.654: INFO: Pod "pod-update-activedeadlineseconds-b5f9d556-364a-4f2b-898a-d6a0d9b328aa" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:22:49.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8478" for this suite.

• [SLOW TEST:6.703 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":183,"skipped":3309,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:22:49.670: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:22:49.998: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jul 27 11:22:50.250: INFO: Number of nodes with available pods: 0
Jul 27 11:22:50.250: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jul 27 11:22:50.412: INFO: Number of nodes with available pods: 0
Jul 27 11:22:50.412: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:22:51.429: INFO: Number of nodes with available pods: 0
Jul 27 11:22:51.429: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:22:52.416: INFO: Number of nodes with available pods: 0
Jul 27 11:22:52.416: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:22:53.424: INFO: Number of nodes with available pods: 0
Jul 27 11:22:53.424: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:22:54.416: INFO: Number of nodes with available pods: 1
Jul 27 11:22:54.416: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jul 27 11:22:54.470: INFO: Number of nodes with available pods: 1
Jul 27 11:22:54.470: INFO: Number of running nodes: 0, number of available pods: 1
Jul 27 11:22:55.475: INFO: Number of nodes with available pods: 0
Jul 27 11:22:55.475: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jul 27 11:22:55.523: INFO: Number of nodes with available pods: 0
Jul 27 11:22:55.523: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:22:56.527: INFO: Number of nodes with available pods: 0
Jul 27 11:22:56.527: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:22:57.543: INFO: Number of nodes with available pods: 0
Jul 27 11:22:57.543: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:22:58.527: INFO: Number of nodes with available pods: 0
Jul 27 11:22:58.527: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:22:59.527: INFO: Number of nodes with available pods: 0
Jul 27 11:22:59.527: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:23:00.527: INFO: Number of nodes with available pods: 0
Jul 27 11:23:00.527: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:23:01.527: INFO: Number of nodes with available pods: 0
Jul 27 11:23:01.527: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:23:02.528: INFO: Number of nodes with available pods: 0
Jul 27 11:23:02.528: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:23:03.527: INFO: Number of nodes with available pods: 0
Jul 27 11:23:03.527: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:23:04.527: INFO: Number of nodes with available pods: 0
Jul 27 11:23:04.527: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:23:05.616: INFO: Number of nodes with available pods: 0
Jul 27 11:23:05.616: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:23:06.527: INFO: Number of nodes with available pods: 0
Jul 27 11:23:06.527: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:23:07.527: INFO: Number of nodes with available pods: 1
Jul 27 11:23:07.527: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2436, will wait for the garbage collector to delete the pods
Jul 27 11:23:07.605: INFO: Deleting DaemonSet.extensions daemon-set took: 19.071647ms
Jul 27 11:23:07.905: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.293868ms
Jul 27 11:23:13.413: INFO: Number of nodes with available pods: 0
Jul 27 11:23:13.413: INFO: Number of running nodes: 0, number of available pods: 0
Jul 27 11:23:13.415: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2436/daemonsets","resourceVersion":"4561468"},"items":null}

Jul 27 11:23:13.418: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2436/pods","resourceVersion":"4561469"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:23:13.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2436" for this suite.

• [SLOW TEST:23.820 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":184,"skipped":3323,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:23:13.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-404e1ed6-858c-447c-9d24-8dfa71d21328
STEP: Creating configMap with name cm-test-opt-upd-ed0c87ac-e3c4-4d08-b6eb-6e87d244dc8c
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-404e1ed6-858c-447c-9d24-8dfa71d21328
STEP: Updating configmap cm-test-opt-upd-ed0c87ac-e3c4-4d08-b6eb-6e87d244dc8c
STEP: Creating configMap with name cm-test-opt-create-dc8f7e2f-4491-472b-8503-39540c5fa924
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:23:23.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2813" for this suite.

• [SLOW TEST:10.397 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":185,"skipped":3333,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:23:23.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:23:23.971: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae7efbbf-bc05-4039-8930-251b74273bdd" in namespace "downward-api-9795" to be "Succeeded or Failed"
Jul 27 11:23:23.997: INFO: Pod "downwardapi-volume-ae7efbbf-bc05-4039-8930-251b74273bdd": Phase="Pending", Reason="", readiness=false. Elapsed: 25.805081ms
Jul 27 11:23:26.004: INFO: Pod "downwardapi-volume-ae7efbbf-bc05-4039-8930-251b74273bdd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032902066s
Jul 27 11:23:28.009: INFO: Pod "downwardapi-volume-ae7efbbf-bc05-4039-8930-251b74273bdd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037297676s
STEP: Saw pod success
Jul 27 11:23:28.009: INFO: Pod "downwardapi-volume-ae7efbbf-bc05-4039-8930-251b74273bdd" satisfied condition "Succeeded or Failed"
Jul 27 11:23:28.012: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-ae7efbbf-bc05-4039-8930-251b74273bdd container client-container: <nil>
STEP: delete the pod
Jul 27 11:23:28.052: INFO: Waiting for pod downwardapi-volume-ae7efbbf-bc05-4039-8930-251b74273bdd to disappear
Jul 27 11:23:28.066: INFO: Pod downwardapi-volume-ae7efbbf-bc05-4039-8930-251b74273bdd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:23:28.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9795" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3378,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:23:28.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Jul 27 11:23:34.699: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8599 pod-service-account-7457533b-a77a-475d-9332-a0ca8081d293 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jul 27 11:23:34.937: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8599 pod-service-account-7457533b-a77a-475d-9332-a0ca8081d293 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jul 27 11:23:35.146: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8599 pod-service-account-7457533b-a77a-475d-9332-a0ca8081d293 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:23:35.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8599" for this suite.

• [SLOW TEST:7.292 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":275,"completed":187,"skipped":3389,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:23:35.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-ffe55a05-311b-431a-9d35-6e9a120c539d
STEP: Creating a pod to test consume configMaps
Jul 27 11:23:35.635: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d0595070-1bb8-4c80-8065-1d4152ff8157" in namespace "projected-4118" to be "Succeeded or Failed"
Jul 27 11:23:35.678: INFO: Pod "pod-projected-configmaps-d0595070-1bb8-4c80-8065-1d4152ff8157": Phase="Pending", Reason="", readiness=false. Elapsed: 43.615965ms
Jul 27 11:23:37.682: INFO: Pod "pod-projected-configmaps-d0595070-1bb8-4c80-8065-1d4152ff8157": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047076942s
Jul 27 11:23:39.686: INFO: Pod "pod-projected-configmaps-d0595070-1bb8-4c80-8065-1d4152ff8157": Phase="Pending", Reason="", readiness=false. Elapsed: 4.051566378s
Jul 27 11:23:41.690: INFO: Pod "pod-projected-configmaps-d0595070-1bb8-4c80-8065-1d4152ff8157": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055727283s
STEP: Saw pod success
Jul 27 11:23:41.690: INFO: Pod "pod-projected-configmaps-d0595070-1bb8-4c80-8065-1d4152ff8157" satisfied condition "Succeeded or Failed"
Jul 27 11:23:41.694: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-d0595070-1bb8-4c80-8065-1d4152ff8157 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 27 11:23:41.729: INFO: Waiting for pod pod-projected-configmaps-d0595070-1bb8-4c80-8065-1d4152ff8157 to disappear
Jul 27 11:23:41.771: INFO: Pod pod-projected-configmaps-d0595070-1bb8-4c80-8065-1d4152ff8157 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:23:41.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4118" for this suite.

• [SLOW TEST:6.412 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":188,"skipped":3389,"failed":0}
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:23:41.779: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6745
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Jul 27 11:23:41.943: INFO: Found 0 stateful pods, waiting for 3
Jul 27 11:23:51.947: INFO: Found 2 stateful pods, waiting for 3
Jul 27 11:24:01.948: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:24:01.948: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:24:01.948: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jul 27 11:24:01.975: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jul 27 11:24:12.155: INFO: Updating stateful set ss2
Jul 27 11:24:12.192: INFO: Waiting for Pod statefulset-6745/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 27 11:24:22.407: INFO: Waiting for Pod statefulset-6745/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jul 27 11:24:32.719: INFO: Found 2 stateful pods, waiting for 3
Jul 27 11:24:42.724: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:24:42.724: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:24:42.724: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jul 27 11:24:42.750: INFO: Updating stateful set ss2
Jul 27 11:24:42.814: INFO: Waiting for Pod statefulset-6745/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 27 11:24:52.822: INFO: Waiting for Pod statefulset-6745/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 27 11:25:02.841: INFO: Updating stateful set ss2
Jul 27 11:25:02.910: INFO: Waiting for StatefulSet statefulset-6745/ss2 to complete update
Jul 27 11:25:02.910: INFO: Waiting for Pod statefulset-6745/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jul 27 11:25:12.918: INFO: Waiting for StatefulSet statefulset-6745/ss2 to complete update
Jul 27 11:25:12.918: INFO: Waiting for Pod statefulset-6745/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 27 11:25:22.918: INFO: Deleting all statefulset in ns statefulset-6745
Jul 27 11:25:22.921: INFO: Scaling statefulset ss2 to 0
Jul 27 11:26:02.963: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:26:02.966: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:26:02.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6745" for this suite.

• [SLOW TEST:141.223 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":189,"skipped":3398,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:26:03.004: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:26:03.097: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffc69310-af20-4025-93e5-ffef9105a0f9" in namespace "downward-api-2513" to be "Succeeded or Failed"
Jul 27 11:26:03.119: INFO: Pod "downwardapi-volume-ffc69310-af20-4025-93e5-ffef9105a0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 21.683836ms
Jul 27 11:26:05.123: INFO: Pod "downwardapi-volume-ffc69310-af20-4025-93e5-ffef9105a0f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025879349s
Jul 27 11:26:07.127: INFO: Pod "downwardapi-volume-ffc69310-af20-4025-93e5-ffef9105a0f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029698519s
STEP: Saw pod success
Jul 27 11:26:07.127: INFO: Pod "downwardapi-volume-ffc69310-af20-4025-93e5-ffef9105a0f9" satisfied condition "Succeeded or Failed"
Jul 27 11:26:07.130: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-ffc69310-af20-4025-93e5-ffef9105a0f9 container client-container: 
STEP: delete the pod
Jul 27 11:26:07.167: INFO: Waiting for pod downwardapi-volume-ffc69310-af20-4025-93e5-ffef9105a0f9 to disappear
Jul 27 11:26:07.184: INFO: Pod downwardapi-volume-ffc69310-af20-4025-93e5-ffef9105a0f9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:26:07.185: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2513" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":190,"skipped":3457,"failed":0}
SSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:26:07.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-66gtt in namespace proxy-1248
I0727 11:26:07.416953       7 runners.go:190] Created replication controller with name: proxy-service-66gtt, namespace: proxy-1248, replica count: 1
I0727 11:26:08.467478       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:26:09.467721       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:26:10.467983       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:26:11.468361       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:26:12.468638       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:26:13.468949       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0727 11:26:14.469182       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0727 11:26:15.469468       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0727 11:26:16.469699       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0727 11:26:17.469950       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0727 11:26:18.470187       7 runners.go:190] proxy-service-66gtt Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 27 11:26:18.475: INFO: setup took 11.154540122s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jul 27 11:26:18.482: INFO: (0) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 6.938296ms)
Jul 27 11:26:18.482: INFO: (0) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 6.93816ms)
Jul 27 11:26:18.485: INFO: (0) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 9.629088ms)
Jul 27 11:26:18.485: INFO: (0) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 9.855354ms)
Jul 27 11:26:18.485: INFO: (0) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 10.090171ms)
Jul 27 11:26:18.485: INFO: (0) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 9.848931ms)
Jul 27 11:26:18.486: INFO: (0) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 10.372846ms)
Jul 27 11:26:18.486: INFO: (0) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 10.576736ms)
Jul 27 11:26:18.487: INFO: (0) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 11.894612ms)
Jul 27 11:26:18.488: INFO: (0) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 12.320978ms)
Jul 27 11:26:18.489: INFO: (0) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 13.949847ms)
Jul 27 11:26:18.490: INFO: (0) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 14.791862ms)
Jul 27 11:26:18.490: INFO: (0) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 14.836701ms)
Jul 27 11:26:18.490: INFO: (0) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 15.093136ms)
Jul 27 11:26:18.490: INFO: (0) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 14.98568ms)
Jul 27 11:26:18.491: INFO: (0) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test (200; 3.943514ms)
Jul 27 11:26:18.495: INFO: (1) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 4.0269ms)
Jul 27 11:26:18.495: INFO: (1) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 3.993002ms)
Jul 27 11:26:18.495: INFO: (1) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test<... (200; 4.413915ms)
Jul 27 11:26:18.495: INFO: (1) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 4.487455ms)
Jul 27 11:26:18.495: INFO: (1) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 4.384248ms)
Jul 27 11:26:18.495: INFO: (1) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 4.456729ms)
Jul 27 11:26:18.495: INFO: (1) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 4.59307ms)
Jul 27 11:26:18.497: INFO: (1) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 5.988633ms)
Jul 27 11:26:18.497: INFO: (1) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 6.008996ms)
Jul 27 11:26:18.497: INFO: (1) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 6.064887ms)
Jul 27 11:26:18.497: INFO: (1) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 6.162572ms)
Jul 27 11:26:18.500: INFO: (2) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 2.502615ms)
Jul 27 11:26:18.500: INFO: (2) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 2.8644ms)
Jul 27 11:26:18.500: INFO: (2) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 2.841076ms)
Jul 27 11:26:18.501: INFO: (2) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 4.166138ms)
Jul 27 11:26:18.502: INFO: (2) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 4.700228ms)
Jul 27 11:26:18.502: INFO: (2) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 4.809799ms)
Jul 27 11:26:18.502: INFO: (2) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 4.826953ms)
Jul 27 11:26:18.502: INFO: (2) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 5.039964ms)
Jul 27 11:26:18.502: INFO: (2) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 5.110531ms)
Jul 27 11:26:18.503: INFO: (2) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test (200; 4.686937ms)
Jul 27 11:26:18.509: INFO: (3) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 4.721333ms)
Jul 27 11:26:18.509: INFO: (3) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 4.859114ms)
Jul 27 11:26:18.509: INFO: (3) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 4.860062ms)
Jul 27 11:26:18.509: INFO: (3) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 4.851404ms)
Jul 27 11:26:18.509: INFO: (3) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 4.940188ms)
Jul 27 11:26:18.509: INFO: (3) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test<... (200; 4.972381ms)
Jul 27 11:26:18.509: INFO: (3) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 4.982664ms)
Jul 27 11:26:18.509: INFO: (3) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 5.012178ms)
Jul 27 11:26:18.510: INFO: (3) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 5.392764ms)
Jul 27 11:26:18.510: INFO: (3) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 5.383834ms)
Jul 27 11:26:18.510: INFO: (3) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 5.45615ms)
Jul 27 11:26:18.510: INFO: (3) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 5.415814ms)
Jul 27 11:26:18.513: INFO: (4) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 3.12442ms)
Jul 27 11:26:18.513: INFO: (4) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 3.141805ms)
Jul 27 11:26:18.514: INFO: (4) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 4.063047ms)
Jul 27 11:26:18.514: INFO: (4) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 4.282552ms)
Jul 27 11:26:18.514: INFO: (4) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 4.325715ms)
Jul 27 11:26:18.515: INFO: (4) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 4.759292ms)
Jul 27 11:26:18.515: INFO: (4) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 4.79011ms)
Jul 27 11:26:18.515: INFO: (4) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 4.799957ms)
Jul 27 11:26:18.515: INFO: (4) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 4.978549ms)
Jul 27 11:26:18.515: INFO: (4) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 5.412596ms)
Jul 27 11:26:18.515: INFO: (4) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 5.429808ms)
Jul 27 11:26:18.515: INFO: (4) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 5.7386ms)
Jul 27 11:26:18.515: INFO: (4) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test<... (200; 3.724526ms)
Jul 27 11:26:18.520: INFO: (5) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 3.741328ms)
Jul 27 11:26:18.520: INFO: (5) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 3.695912ms)
Jul 27 11:26:18.520: INFO: (5) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 3.643605ms)
Jul 27 11:26:18.520: INFO: (5) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: ... (200; 4.03884ms)
Jul 27 11:26:18.520: INFO: (5) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 4.175147ms)
Jul 27 11:26:18.521: INFO: (5) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 4.985192ms)
Jul 27 11:26:18.521: INFO: (5) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 5.105415ms)
Jul 27 11:26:18.521: INFO: (5) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 5.089788ms)
Jul 27 11:26:18.521: INFO: (5) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 5.11862ms)
Jul 27 11:26:18.521: INFO: (5) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 5.141293ms)
Jul 27 11:26:18.521: INFO: (5) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 5.101464ms)
Jul 27 11:26:18.521: INFO: (5) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 5.140934ms)
Jul 27 11:26:18.525: INFO: (6) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 3.318652ms)
Jul 27 11:26:18.526: INFO: (6) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 4.412078ms)
Jul 27 11:26:18.526: INFO: (6) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 4.763627ms)
Jul 27 11:26:18.526: INFO: (6) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 4.767919ms)
Jul 27 11:26:18.526: INFO: (6) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 5.155443ms)
Jul 27 11:26:18.526: INFO: (6) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 5.031406ms)
Jul 27 11:26:18.527: INFO: (6) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 5.032736ms)
Jul 27 11:26:18.527: INFO: (6) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 5.185351ms)
Jul 27 11:26:18.527: INFO: (6) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 5.088234ms)
Jul 27 11:26:18.527: INFO: (6) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 5.091461ms)
Jul 27 11:26:18.527: INFO: (6) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 5.196024ms)
Jul 27 11:26:18.527: INFO: (6) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test<... (200; 7.607923ms)
Jul 27 11:26:18.536: INFO: (7) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 7.999769ms)
Jul 27 11:26:18.537: INFO: (7) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 8.617682ms)
Jul 27 11:26:18.537: INFO: (7) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 8.751676ms)
Jul 27 11:26:18.537: INFO: (7) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: ... (200; 9.609999ms)
Jul 27 11:26:18.540: INFO: (8) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 2.012483ms)
Jul 27 11:26:18.540: INFO: (8) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test (200; 3.807261ms)
Jul 27 11:26:18.541: INFO: (8) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 3.818309ms)
Jul 27 11:26:18.541: INFO: (8) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 3.849891ms)
Jul 27 11:26:18.541: INFO: (8) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 3.881946ms)
Jul 27 11:26:18.541: INFO: (8) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 3.952208ms)
Jul 27 11:26:18.541: INFO: (8) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 3.98136ms)
Jul 27 11:26:18.541: INFO: (8) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 3.912609ms)
Jul 27 11:26:18.542: INFO: (8) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 4.235501ms)
Jul 27 11:26:18.542: INFO: (8) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 4.25083ms)
Jul 27 11:26:18.542: INFO: (8) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 4.216024ms)
Jul 27 11:26:18.542: INFO: (8) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 4.238308ms)
Jul 27 11:26:18.542: INFO: (8) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 4.23586ms)
Jul 27 11:26:18.542: INFO: (8) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 4.291211ms)
Jul 27 11:26:18.546: INFO: (9) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 3.724297ms)
Jul 27 11:26:18.546: INFO: (9) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 3.815001ms)
Jul 27 11:26:18.546: INFO: (9) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 3.886749ms)
Jul 27 11:26:18.547: INFO: (9) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 4.682043ms)
Jul 27 11:26:18.547: INFO: (9) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 4.703795ms)
Jul 27 11:26:18.547: INFO: (9) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 4.664836ms)
Jul 27 11:26:18.547: INFO: (9) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 4.675783ms)
Jul 27 11:26:18.547: INFO: (9) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 4.716645ms)
Jul 27 11:26:18.547: INFO: (9) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 4.706266ms)
Jul 27 11:26:18.547: INFO: (9) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 4.701769ms)
Jul 27 11:26:18.547: INFO: (9) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test (200; 3.947795ms)
Jul 27 11:26:18.551: INFO: (10) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 3.955221ms)
Jul 27 11:26:18.551: INFO: (10) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 4.206503ms)
Jul 27 11:26:18.552: INFO: (10) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 4.700657ms)
Jul 27 11:26:18.552: INFO: (10) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 4.798355ms)
Jul 27 11:26:18.552: INFO: (10) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 4.834983ms)
Jul 27 11:26:18.552: INFO: (10) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 4.842704ms)
Jul 27 11:26:18.552: INFO: (10) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 5.155122ms)
Jul 27 11:26:18.552: INFO: (10) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 5.130643ms)
Jul 27 11:26:18.552: INFO: (10) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test (200; 3.235481ms)
Jul 27 11:26:18.558: INFO: (11) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 3.259301ms)
Jul 27 11:26:18.558: INFO: (11) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 3.305858ms)
Jul 27 11:26:18.558: INFO: (11) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 3.295574ms)
Jul 27 11:26:18.558: INFO: (11) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 3.3192ms)
Jul 27 11:26:18.558: INFO: (11) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 3.425692ms)
Jul 27 11:26:18.559: INFO: (11) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 3.541242ms)
Jul 27 11:26:18.559: INFO: (11) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 3.874659ms)
Jul 27 11:26:18.559: INFO: (11) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 4.003016ms)
Jul 27 11:26:18.559: INFO: (11) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 4.292861ms)
Jul 27 11:26:18.559: INFO: (11) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: ... (200; 5.163922ms)
Jul 27 11:26:18.565: INFO: (12) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test (200; 5.328795ms)
Jul 27 11:26:18.565: INFO: (12) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 5.59415ms)
Jul 27 11:26:18.565: INFO: (12) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 5.615881ms)
Jul 27 11:26:18.565: INFO: (12) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 5.612082ms)
Jul 27 11:26:18.565: INFO: (12) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 5.567097ms)
Jul 27 11:26:18.570: INFO: (13) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 4.901691ms)
Jul 27 11:26:18.570: INFO: (13) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 4.933449ms)
Jul 27 11:26:18.571: INFO: (13) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 5.310759ms)
Jul 27 11:26:18.571: INFO: (13) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 5.322559ms)
Jul 27 11:26:18.571: INFO: (13) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 5.362443ms)
Jul 27 11:26:18.571: INFO: (13) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 5.410439ms)
Jul 27 11:26:18.571: INFO: (13) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 5.345732ms)
Jul 27 11:26:18.571: INFO: (13) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 5.533982ms)
Jul 27 11:26:18.571: INFO: (13) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 5.596059ms)
Jul 27 11:26:18.571: INFO: (13) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: ... (200; 5.928791ms)
Jul 27 11:26:18.571: INFO: (13) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 6.030335ms)
Jul 27 11:26:18.572: INFO: (13) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 6.314846ms)
Jul 27 11:26:18.572: INFO: (13) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 6.382017ms)
Jul 27 11:26:18.572: INFO: (13) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 6.3795ms)
Jul 27 11:26:18.572: INFO: (13) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 6.479596ms)
Jul 27 11:26:18.575: INFO: (14) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 3.409516ms)
Jul 27 11:26:18.576: INFO: (14) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 4.459131ms)
Jul 27 11:26:18.576: INFO: (14) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 4.557592ms)
Jul 27 11:26:18.576: INFO: (14) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 4.506822ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 4.618602ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 4.606158ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 4.681775ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 4.866629ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 4.870869ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 5.186648ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 5.213481ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 5.232192ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 5.233748ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 5.291773ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 5.300048ms)
Jul 27 11:26:18.577: INFO: (14) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: ... (200; 5.623836ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 5.649563ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 5.696798ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 5.674459ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 5.682824ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 5.682884ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 5.687986ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 5.743082ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 5.711065ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 5.692731ms)
Jul 27 11:26:18.583: INFO: (15) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 5.677827ms)
Jul 27 11:26:18.588: INFO: (16) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 4.964441ms)
Jul 27 11:26:18.589: INFO: (16) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 5.94776ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 6.71341ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 6.643615ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname2/proxy/: bar (200; 6.646461ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 6.729215ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 6.708955ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 6.65987ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 6.860833ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname1/proxy/: foo (200; 6.842786ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/services/http:proxy-service-66gtt:portname2/proxy/: bar (200; 6.894072ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname2/proxy/: tls qux (200; 7.046879ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 6.93154ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/services/https:proxy-service-66gtt:tlsportname1/proxy/: tls baz (200; 6.925849ms)
Jul 27 11:26:18.590: INFO: (16) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test<... (200; 32.080848ms)
Jul 27 11:26:18.623: INFO: (17) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 32.21857ms)
Jul 27 11:26:18.623: INFO: (17) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:462/proxy/: tls qux (200; 32.284411ms)
Jul 27 11:26:18.623: INFO: (17) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 32.331421ms)
Jul 27 11:26:18.623: INFO: (17) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 32.35478ms)
Jul 27 11:26:18.623: INFO: (17) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 32.324224ms)
Jul 27 11:26:18.623: INFO: (17) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 32.419155ms)
Jul 27 11:26:18.623: INFO: (17) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 32.677986ms)
Jul 27 11:26:18.624: INFO: (17) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 32.928515ms)
Jul 27 11:26:18.624: INFO: (17) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: test (200; 10.756772ms)
Jul 27 11:26:18.636: INFO: (18) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 10.58143ms)
Jul 27 11:26:18.637: INFO: (18) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 10.387199ms)
Jul 27 11:26:18.637: INFO: (18) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 10.921294ms)
Jul 27 11:26:18.637: INFO: (18) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 10.66016ms)
Jul 27 11:26:18.637: INFO: (18) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: ... (200; 10.624707ms)
Jul 27 11:26:18.639: INFO: (19) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 1.888324ms)
Jul 27 11:26:18.640: INFO: (19) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:162/proxy/: bar (200; 3.262323ms)
Jul 27 11:26:18.640: INFO: (19) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm/proxy/: test (200; 3.434268ms)
Jul 27 11:26:18.641: INFO: (19) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:460/proxy/: tls baz (200; 3.564418ms)
Jul 27 11:26:18.641: INFO: (19) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:160/proxy/: foo (200; 3.722333ms)
Jul 27 11:26:18.641: INFO: (19) /api/v1/namespaces/proxy-1248/services/proxy-service-66gtt:portname1/proxy/: foo (200; 3.829657ms)
Jul 27 11:26:18.641: INFO: (19) /api/v1/namespaces/proxy-1248/pods/http:proxy-service-66gtt-jmmsm:1080/proxy/: ... (200; 3.847903ms)
Jul 27 11:26:18.641: INFO: (19) /api/v1/namespaces/proxy-1248/pods/proxy-service-66gtt-jmmsm:1080/proxy/: test<... (200; 3.888837ms)
Jul 27 11:26:18.641: INFO: (19) /api/v1/namespaces/proxy-1248/pods/https:proxy-service-66gtt-jmmsm:443/proxy/: 
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-1093
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 27 11:26:23.696: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 27 11:26:23.770: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:26:25.774: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:26:27.775: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:26:29.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:26:31.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:26:33.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:26:35.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:26:37.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:26:39.774: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:26:41.774: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 27 11:26:41.780: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 27 11:26:45.854: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.213:8080/dial?request=hostname&protocol=http&host=10.244.2.212&port=8080&tries=1'] Namespace:pod-network-test-1093 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 27 11:26:45.854: INFO: >>> kubeConfig: /root/.kube/config
I0727 11:26:45.885298       7 log.go:172] (0xc002569600) (0xc001baa6e0) Create stream
I0727 11:26:45.885329       7 log.go:172] (0xc002569600) (0xc001baa6e0) Stream added, broadcasting: 1
I0727 11:26:45.887389       7 log.go:172] (0xc002569600) Reply frame received for 1
I0727 11:26:45.887425       7 log.go:172] (0xc002569600) (0xc001946500) Create stream
I0727 11:26:45.887439       7 log.go:172] (0xc002569600) (0xc001946500) Stream added, broadcasting: 3
I0727 11:26:45.888839       7 log.go:172] (0xc002569600) Reply frame received for 3
I0727 11:26:45.888909       7 log.go:172] (0xc002569600) (0xc002741e00) Create stream
I0727 11:26:45.888936       7 log.go:172] (0xc002569600) (0xc002741e00) Stream added, broadcasting: 5
I0727 11:26:45.890111       7 log.go:172] (0xc002569600) Reply frame received for 5
I0727 11:26:45.986253       7 log.go:172] (0xc002569600) Data frame received for 3
I0727 11:26:45.986288       7 log.go:172] (0xc001946500) (3) Data frame handling
I0727 11:26:45.986319       7 log.go:172] (0xc001946500) (3) Data frame sent
I0727 11:26:45.986711       7 log.go:172] (0xc002569600) Data frame received for 3
I0727 11:26:45.986751       7 log.go:172] (0xc001946500) (3) Data frame handling
I0727 11:26:45.987050       7 log.go:172] (0xc002569600) Data frame received for 5
I0727 11:26:45.987077       7 log.go:172] (0xc002741e00) (5) Data frame handling
I0727 11:26:45.988947       7 log.go:172] (0xc002569600) Data frame received for 1
I0727 11:26:45.988965       7 log.go:172] (0xc001baa6e0) (1) Data frame handling
I0727 11:26:45.988976       7 log.go:172] (0xc001baa6e0) (1) Data frame sent
I0727 11:26:45.989098       7 log.go:172] (0xc002569600) (0xc001baa6e0) Stream removed, broadcasting: 1
I0727 11:26:45.989132       7 log.go:172] (0xc002569600) Go away received
I0727 11:26:45.989201       7 log.go:172] (0xc002569600) (0xc001baa6e0) Stream removed, broadcasting: 1
I0727 11:26:45.989215       7 log.go:172] (0xc002569600) (0xc001946500) Stream removed, broadcasting: 3
I0727 11:26:45.989220       7 log.go:172] (0xc002569600) (0xc002741e00) Stream removed, broadcasting: 5
Jul 27 11:26:45.989: INFO: Waiting for responses: map[]
Jul 27 11:26:45.992: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.213:8080/dial?request=hostname&protocol=http&host=10.244.1.85&port=8080&tries=1'] Namespace:pod-network-test-1093 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 27 11:26:45.992: INFO: >>> kubeConfig: /root/.kube/config
I0727 11:26:46.028806       7 log.go:172] (0xc002d660b0) (0xc001d66140) Create stream
I0727 11:26:46.028848       7 log.go:172] (0xc002d660b0) (0xc001d66140) Stream added, broadcasting: 1
I0727 11:26:46.032478       7 log.go:172] (0xc002d660b0) Reply frame received for 1
I0727 11:26:46.032522       7 log.go:172] (0xc002d660b0) (0xc001d661e0) Create stream
I0727 11:26:46.032538       7 log.go:172] (0xc002d660b0) (0xc001d661e0) Stream added, broadcasting: 3
I0727 11:26:46.033757       7 log.go:172] (0xc002d660b0) Reply frame received for 3
I0727 11:26:46.033801       7 log.go:172] (0xc002d660b0) (0xc001d663c0) Create stream
I0727 11:26:46.033814       7 log.go:172] (0xc002d660b0) (0xc001d663c0) Stream added, broadcasting: 5
I0727 11:26:46.034607       7 log.go:172] (0xc002d660b0) Reply frame received for 5
I0727 11:26:46.095929       7 log.go:172] (0xc002d660b0) Data frame received for 3
I0727 11:26:46.095981       7 log.go:172] (0xc001d661e0) (3) Data frame handling
I0727 11:26:46.096017       7 log.go:172] (0xc001d661e0) (3) Data frame sent
I0727 11:26:46.096484       7 log.go:172] (0xc002d660b0) Data frame received for 5
I0727 11:26:46.096509       7 log.go:172] (0xc001d663c0) (5) Data frame handling
I0727 11:26:46.097072       7 log.go:172] (0xc002d660b0) Data frame received for 3
I0727 11:26:46.097097       7 log.go:172] (0xc001d661e0) (3) Data frame handling
I0727 11:26:46.098205       7 log.go:172] (0xc002d660b0) Data frame received for 1
I0727 11:26:46.098272       7 log.go:172] (0xc001d66140) (1) Data frame handling
I0727 11:26:46.098303       7 log.go:172] (0xc001d66140) (1) Data frame sent
I0727 11:26:46.098320       7 log.go:172] (0xc002d660b0) (0xc001d66140) Stream removed, broadcasting: 1
I0727 11:26:46.098336       7 log.go:172] (0xc002d660b0) Go away received
I0727 11:26:46.098495       7 log.go:172] (0xc002d660b0) (0xc001d66140) Stream removed, broadcasting: 1
I0727 11:26:46.098531       7 log.go:172] (0xc002d660b0) (0xc001d661e0) Stream removed, broadcasting: 3
I0727 11:26:46.098552       7 log.go:172] (0xc002d660b0) (0xc001d663c0) Stream removed, broadcasting: 5
Jul 27 11:26:46.098: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:26:46.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1093" for this suite.

• [SLOW TEST:22.498 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":192,"skipped":3464,"failed":0}
SSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:26:46.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:26:46.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8361
I0727 11:26:46.291716       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8361, replica count: 1
I0727 11:26:47.342149       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:26:48.342364       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:26:49.342554       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:26:50.342788       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 27 11:26:50.657: INFO: Created: latency-svc-97zdc
Jul 27 11:26:50.710: INFO: Got endpoints: latency-svc-97zdc [267.123835ms]
Jul 27 11:26:51.089: INFO: Created: latency-svc-x5t29
Jul 27 11:26:51.128: INFO: Got endpoints: latency-svc-x5t29 [417.995618ms]
Jul 27 11:26:51.227: INFO: Created: latency-svc-qsql6
Jul 27 11:26:51.255: INFO: Got endpoints: latency-svc-qsql6 [544.761083ms]
Jul 27 11:26:51.480: INFO: Created: latency-svc-qj5bd
Jul 27 11:26:51.563: INFO: Got endpoints: latency-svc-qj5bd [852.654873ms]
Jul 27 11:26:51.660: INFO: Created: latency-svc-6prp5
Jul 27 11:26:51.691: INFO: Got endpoints: latency-svc-6prp5 [980.460894ms]
Jul 27 11:26:51.787: INFO: Created: latency-svc-5j4l5
Jul 27 11:26:52.056: INFO: Got endpoints: latency-svc-5j4l5 [1.346233454s]
Jul 27 11:26:52.234: INFO: Created: latency-svc-bm58s
Jul 27 11:26:52.266: INFO: Created: latency-svc-24xlt
Jul 27 11:26:52.266: INFO: Got endpoints: latency-svc-bm58s [1.555636102s]
Jul 27 11:26:52.761: INFO: Got endpoints: latency-svc-24xlt [2.051543875s]
Jul 27 11:26:52.963: INFO: Created: latency-svc-fc7x9
Jul 27 11:26:53.017: INFO: Got endpoints: latency-svc-fc7x9 [2.306969581s]
Jul 27 11:26:53.138: INFO: Created: latency-svc-lqbp9
Jul 27 11:26:53.143: INFO: Got endpoints: latency-svc-lqbp9 [2.432698369s]
Jul 27 11:26:53.527: INFO: Created: latency-svc-dlvgc
Jul 27 11:26:53.733: INFO: Got endpoints: latency-svc-dlvgc [3.022610607s]
Jul 27 11:26:54.134: INFO: Created: latency-svc-9vks7
Jul 27 11:26:54.318: INFO: Got endpoints: latency-svc-9vks7 [3.607800435s]
Jul 27 11:26:54.363: INFO: Created: latency-svc-xc7zm
Jul 27 11:26:54.398: INFO: Got endpoints: latency-svc-xc7zm [3.688082695s]
Jul 27 11:26:54.510: INFO: Created: latency-svc-b2rmm
Jul 27 11:26:54.803: INFO: Got endpoints: latency-svc-b2rmm [4.093333714s]
Jul 27 11:26:54.874: INFO: Created: latency-svc-dk9cq
Jul 27 11:26:54.899: INFO: Got endpoints: latency-svc-dk9cq [4.188654036s]
Jul 27 11:26:54.986: INFO: Created: latency-svc-g5zqv
Jul 27 11:26:55.108: INFO: Got endpoints: latency-svc-g5zqv [4.398032279s]
Jul 27 11:26:55.113: INFO: Created: latency-svc-kd5dp
Jul 27 11:26:55.149: INFO: Got endpoints: latency-svc-kd5dp [4.021140364s]
Jul 27 11:26:55.246: INFO: Created: latency-svc-7vdjz
Jul 27 11:26:55.258: INFO: Got endpoints: latency-svc-7vdjz [4.002841675s]
Jul 27 11:26:55.292: INFO: Created: latency-svc-fb58m
Jul 27 11:26:55.313: INFO: Got endpoints: latency-svc-fb58m [3.749934424s]
Jul 27 11:26:55.341: INFO: Created: latency-svc-wzdhc
Jul 27 11:26:55.389: INFO: Got endpoints: latency-svc-wzdhc [3.698793901s]
Jul 27 11:26:55.412: INFO: Created: latency-svc-j2njz
Jul 27 11:26:55.439: INFO: Got endpoints: latency-svc-j2njz [3.382643839s]
Jul 27 11:26:55.534: INFO: Created: latency-svc-ssdx5
Jul 27 11:26:55.551: INFO: Got endpoints: latency-svc-ssdx5 [3.285427552s]
Jul 27 11:26:55.605: INFO: Created: latency-svc-6bvds
Jul 27 11:26:55.613: INFO: Got endpoints: latency-svc-6bvds [2.851574921s]
Jul 27 11:26:55.697: INFO: Created: latency-svc-pg2wh
Jul 27 11:26:55.715: INFO: Got endpoints: latency-svc-pg2wh [2.698132647s]
Jul 27 11:26:55.766: INFO: Created: latency-svc-d25v9
Jul 27 11:26:55.782: INFO: Got endpoints: latency-svc-d25v9 [2.639381798s]
Jul 27 11:26:55.899: INFO: Created: latency-svc-829w6
Jul 27 11:26:55.915: INFO: Got endpoints: latency-svc-829w6 [2.181630296s]
Jul 27 11:26:56.006: INFO: Created: latency-svc-zbxh9
Jul 27 11:26:56.070: INFO: Got endpoints: latency-svc-zbxh9 [1.75221876s]
Jul 27 11:26:56.097: INFO: Created: latency-svc-vwbsw
Jul 27 11:26:56.168: INFO: Got endpoints: latency-svc-vwbsw [1.769876502s]
Jul 27 11:26:56.205: INFO: Created: latency-svc-gsrpw
Jul 27 11:26:56.216: INFO: Got endpoints: latency-svc-gsrpw [1.412789387s]
Jul 27 11:26:56.264: INFO: Created: latency-svc-c24bj
Jul 27 11:26:56.299: INFO: Got endpoints: latency-svc-c24bj [1.400446591s]
Jul 27 11:26:56.355: INFO: Created: latency-svc-phm2d
Jul 27 11:26:56.462: INFO: Got endpoints: latency-svc-phm2d [1.353682655s]
Jul 27 11:26:56.901: INFO: Created: latency-svc-zfmpl
Jul 27 11:26:56.952: INFO: Got endpoints: latency-svc-zfmpl [1.80306679s]
Jul 27 11:26:57.077: INFO: Created: latency-svc-hxss5
Jul 27 11:26:57.115: INFO: Got endpoints: latency-svc-hxss5 [1.856898499s]
Jul 27 11:26:57.671: INFO: Created: latency-svc-tsmjp
Jul 27 11:26:57.677: INFO: Got endpoints: latency-svc-tsmjp [2.364808608s]
Jul 27 11:26:57.730: INFO: Created: latency-svc-t9dcw
Jul 27 11:26:57.857: INFO: Got endpoints: latency-svc-t9dcw [2.467249004s]
Jul 27 11:26:58.006: INFO: Created: latency-svc-pkrmm
Jul 27 11:26:58.017: INFO: Got endpoints: latency-svc-pkrmm [2.577814883s]
Jul 27 11:26:58.049: INFO: Created: latency-svc-h7qkd
Jul 27 11:26:58.068: INFO: Got endpoints: latency-svc-h7qkd [2.516941578s]
Jul 27 11:26:58.175: INFO: Created: latency-svc-kw9g6
Jul 27 11:26:58.230: INFO: Got endpoints: latency-svc-kw9g6 [2.616572141s]
Jul 27 11:26:58.230: INFO: Created: latency-svc-f75rl
Jul 27 11:26:58.255: INFO: Got endpoints: latency-svc-f75rl [2.539086318s]
Jul 27 11:26:58.318: INFO: Created: latency-svc-hjq8c
Jul 27 11:26:58.322: INFO: Got endpoints: latency-svc-hjq8c [2.539342994s]
Jul 27 11:26:58.350: INFO: Created: latency-svc-855bc
Jul 27 11:26:58.374: INFO: Got endpoints: latency-svc-855bc [2.459705905s]
Jul 27 11:26:58.462: INFO: Created: latency-svc-nw7sd
Jul 27 11:26:58.471: INFO: Got endpoints: latency-svc-nw7sd [2.400684215s]
Jul 27 11:26:58.494: INFO: Created: latency-svc-rfg9w
Jul 27 11:26:58.524: INFO: Got endpoints: latency-svc-rfg9w [2.35658097s]
Jul 27 11:26:58.548: INFO: Created: latency-svc-2jxth
Jul 27 11:26:58.611: INFO: Got endpoints: latency-svc-2jxth [2.39428145s]
Jul 27 11:26:58.613: INFO: Created: latency-svc-vq5gj
Jul 27 11:26:58.621: INFO: Got endpoints: latency-svc-vq5gj [2.322185242s]
Jul 27 11:26:58.675: INFO: Created: latency-svc-crkt7
Jul 27 11:26:58.688: INFO: Got endpoints: latency-svc-crkt7 [2.22605214s]
Jul 27 11:26:58.710: INFO: Created: latency-svc-xjbmt
Jul 27 11:26:58.773: INFO: Got endpoints: latency-svc-xjbmt [1.82097024s]
Jul 27 11:26:58.776: INFO: Created: latency-svc-p78np
Jul 27 11:26:58.794: INFO: Got endpoints: latency-svc-p78np [1.678836665s]
Jul 27 11:26:58.825: INFO: Created: latency-svc-2gcnr
Jul 27 11:26:58.833: INFO: Got endpoints: latency-svc-2gcnr [1.15535213s]
Jul 27 11:26:58.865: INFO: Created: latency-svc-jshz2
Jul 27 11:26:58.910: INFO: Got endpoints: latency-svc-jshz2 [1.053501428s]
Jul 27 11:26:58.975: INFO: Created: latency-svc-fh9f5
Jul 27 11:26:58.989: INFO: Got endpoints: latency-svc-fh9f5 [972.054993ms]
Jul 27 11:26:59.060: INFO: Created: latency-svc-qdxvr
Jul 27 11:26:59.068: INFO: Got endpoints: latency-svc-qdxvr [999.695153ms]
Jul 27 11:26:59.123: INFO: Created: latency-svc-8frq8
Jul 27 11:26:59.159: INFO: Got endpoints: latency-svc-8frq8 [929.690994ms]
Jul 27 11:26:59.207: INFO: Created: latency-svc-r7crk
Jul 27 11:26:59.224: INFO: Got endpoints: latency-svc-r7crk [969.786052ms]
Jul 27 11:26:59.250: INFO: Created: latency-svc-dcz8g
Jul 27 11:26:59.266: INFO: Got endpoints: latency-svc-dcz8g [944.11698ms]
Jul 27 11:26:59.339: INFO: Created: latency-svc-jpk8x
Jul 27 11:26:59.356: INFO: Got endpoints: latency-svc-jpk8x [981.491314ms]
Jul 27 11:26:59.375: INFO: Created: latency-svc-mk9nb
Jul 27 11:26:59.386: INFO: Got endpoints: latency-svc-mk9nb [915.319902ms]
Jul 27 11:26:59.406: INFO: Created: latency-svc-wmxxl
Jul 27 11:26:59.479: INFO: Got endpoints: latency-svc-wmxxl [954.440316ms]
Jul 27 11:26:59.497: INFO: Created: latency-svc-86dvl
Jul 27 11:26:59.513: INFO: Got endpoints: latency-svc-86dvl [902.583667ms]
Jul 27 11:26:59.532: INFO: Created: latency-svc-76t8q
Jul 27 11:26:59.546: INFO: Got endpoints: latency-svc-76t8q [924.310828ms]
Jul 27 11:26:59.609: INFO: Created: latency-svc-2nzvl
Jul 27 11:26:59.627: INFO: Got endpoints: latency-svc-2nzvl [938.997514ms]
Jul 27 11:26:59.652: INFO: Created: latency-svc-wz6mh
Jul 27 11:26:59.664: INFO: Got endpoints: latency-svc-wz6mh [891.17229ms]
Jul 27 11:26:59.694: INFO: Created: latency-svc-ch2jr
Jul 27 11:26:59.766: INFO: Got endpoints: latency-svc-ch2jr [972.687723ms]
Jul 27 11:26:59.770: INFO: Created: latency-svc-dq2cm
Jul 27 11:26:59.778: INFO: Got endpoints: latency-svc-dq2cm [945.130507ms]
Jul 27 11:26:59.820: INFO: Created: latency-svc-kwzlz
Jul 27 11:26:59.833: INFO: Got endpoints: latency-svc-kwzlz [922.339548ms]
Jul 27 11:26:59.857: INFO: Created: latency-svc-zqkbh
Jul 27 11:26:59.911: INFO: Got endpoints: latency-svc-zqkbh [921.798484ms]
Jul 27 11:26:59.929: INFO: Created: latency-svc-n8wl5
Jul 27 11:26:59.941: INFO: Got endpoints: latency-svc-n8wl5 [872.766892ms]
Jul 27 11:26:59.969: INFO: Created: latency-svc-8tjfn
Jul 27 11:26:59.984: INFO: Got endpoints: latency-svc-8tjfn [824.146032ms]
Jul 27 11:27:00.005: INFO: Created: latency-svc-lln64
Jul 27 11:27:00.066: INFO: Got endpoints: latency-svc-lln64 [841.275806ms]
Jul 27 11:27:00.070: INFO: Created: latency-svc-t2dhb
Jul 27 11:27:00.096: INFO: Got endpoints: latency-svc-t2dhb [829.578364ms]
Jul 27 11:27:00.125: INFO: Created: latency-svc-6rlm8
Jul 27 11:27:00.141: INFO: Got endpoints: latency-svc-6rlm8 [784.671354ms]
Jul 27 11:27:00.166: INFO: Created: latency-svc-q86sn
Jul 27 11:27:00.222: INFO: Got endpoints: latency-svc-q86sn [835.428118ms]
Jul 27 11:27:00.246: INFO: Created: latency-svc-9d6kc
Jul 27 11:27:00.261: INFO: Got endpoints: latency-svc-9d6kc [781.961156ms]
Jul 27 11:27:00.293: INFO: Created: latency-svc-d9gbj
Jul 27 11:27:00.310: INFO: Got endpoints: latency-svc-d9gbj [796.139259ms]
Jul 27 11:27:00.359: INFO: Created: latency-svc-jkh7z
Jul 27 11:27:00.395: INFO: Got endpoints: latency-svc-jkh7z [849.147733ms]
Jul 27 11:27:00.431: INFO: Created: latency-svc-p8gsq
Jul 27 11:27:00.441: INFO: Got endpoints: latency-svc-p8gsq [813.832444ms]
Jul 27 11:27:00.551: INFO: Created: latency-svc-vcgh2
Jul 27 11:27:00.587: INFO: Got endpoints: latency-svc-vcgh2 [922.40772ms]
Jul 27 11:27:00.864: INFO: Created: latency-svc-drhbj
Jul 27 11:27:00.989: INFO: Got endpoints: latency-svc-drhbj [1.222350529s]
Jul 27 11:27:00.990: INFO: Created: latency-svc-w59pw
Jul 27 11:27:01.006: INFO: Got endpoints: latency-svc-w59pw [1.22819983s]
Jul 27 11:27:01.079: INFO: Created: latency-svc-sx8dg
Jul 27 11:27:01.258: INFO: Got endpoints: latency-svc-sx8dg [1.425503752s]
Jul 27 11:27:01.544: INFO: Created: latency-svc-bshfs
Jul 27 11:27:01.725: INFO: Got endpoints: latency-svc-bshfs [1.814479801s]
Jul 27 11:27:01.736: INFO: Created: latency-svc-24vwh
Jul 27 11:27:01.751: INFO: Got endpoints: latency-svc-24vwh [1.810414312s]
Jul 27 11:27:01.886: INFO: Created: latency-svc-2pbbv
Jul 27 11:27:01.895: INFO: Got endpoints: latency-svc-2pbbv [1.911405667s]
Jul 27 11:27:01.955: INFO: Created: latency-svc-ghf6c
Jul 27 11:27:01.972: INFO: Got endpoints: latency-svc-ghf6c [1.905746466s]
Jul 27 11:27:02.065: INFO: Created: latency-svc-qnrbz
Jul 27 11:27:02.068: INFO: Got endpoints: latency-svc-qnrbz [1.972375603s]
Jul 27 11:27:02.095: INFO: Created: latency-svc-6p8rp
Jul 27 11:27:02.129: INFO: Got endpoints: latency-svc-6p8rp [1.988819897s]
Jul 27 11:27:02.198: INFO: Created: latency-svc-xfcsj
Jul 27 11:27:02.201: INFO: Got endpoints: latency-svc-xfcsj [1.978733138s]
Jul 27 11:27:02.280: INFO: Created: latency-svc-wdwdt
Jul 27 11:27:02.291: INFO: Got endpoints: latency-svc-wdwdt [2.030105652s]
Jul 27 11:27:02.329: INFO: Created: latency-svc-kz4w6
Jul 27 11:27:02.358: INFO: Got endpoints: latency-svc-kz4w6 [2.0480194s]
Jul 27 11:27:02.358: INFO: Created: latency-svc-jqbzp
Jul 27 11:27:02.393: INFO: Got endpoints: latency-svc-jqbzp [1.998310719s]
Jul 27 11:27:02.466: INFO: Created: latency-svc-xxp97
Jul 27 11:27:02.479: INFO: Got endpoints: latency-svc-xxp97 [2.037364893s]
Jul 27 11:27:02.502: INFO: Created: latency-svc-zfvzf
Jul 27 11:27:02.514: INFO: Got endpoints: latency-svc-zfvzf [1.927553983s]
Jul 27 11:27:02.532: INFO: Created: latency-svc-bcz2d
Jul 27 11:27:02.556: INFO: Got endpoints: latency-svc-bcz2d [1.566762963s]
Jul 27 11:27:02.617: INFO: Created: latency-svc-r2qvw
Jul 27 11:27:02.629: INFO: Got endpoints: latency-svc-r2qvw [1.622237246s]
Jul 27 11:27:02.652: INFO: Created: latency-svc-829kb
Jul 27 11:27:02.665: INFO: Got endpoints: latency-svc-829kb [1.40698131s]
Jul 27 11:27:02.682: INFO: Created: latency-svc-z476n
Jul 27 11:27:02.696: INFO: Got endpoints: latency-svc-z476n [970.444311ms]
Jul 27 11:27:02.755: INFO: Created: latency-svc-zkvg8
Jul 27 11:27:02.779: INFO: Got endpoints: latency-svc-zkvg8 [1.027153219s]
Jul 27 11:27:02.779: INFO: Created: latency-svc-b2fcm
Jul 27 11:27:02.808: INFO: Got endpoints: latency-svc-b2fcm [912.526402ms]
Jul 27 11:27:02.846: INFO: Created: latency-svc-274s4
Jul 27 11:27:02.899: INFO: Got endpoints: latency-svc-274s4 [926.786976ms]
Jul 27 11:27:02.907: INFO: Created: latency-svc-cwjhw
Jul 27 11:27:02.940: INFO: Got endpoints: latency-svc-cwjhw [871.809582ms]
Jul 27 11:27:02.989: INFO: Created: latency-svc-8x4tg
Jul 27 11:27:03.042: INFO: Got endpoints: latency-svc-8x4tg [912.634237ms]
Jul 27 11:27:03.059: INFO: Created: latency-svc-fqtmf
Jul 27 11:27:03.075: INFO: Got endpoints: latency-svc-fqtmf [874.841434ms]
Jul 27 11:27:03.107: INFO: Created: latency-svc-h6cxh
Jul 27 11:27:03.130: INFO: Got endpoints: latency-svc-h6cxh [838.444684ms]
Jul 27 11:27:03.192: INFO: Created: latency-svc-nqjqb
Jul 27 11:27:03.208: INFO: Got endpoints: latency-svc-nqjqb [849.966355ms]
Jul 27 11:27:03.246: INFO: Created: latency-svc-4bwnr
Jul 27 11:27:03.325: INFO: Got endpoints: latency-svc-4bwnr [931.262658ms]
Jul 27 11:27:03.326: INFO: Created: latency-svc-lsbt8
Jul 27 11:27:03.341: INFO: Got endpoints: latency-svc-lsbt8 [861.938971ms]
Jul 27 11:27:03.403: INFO: Created: latency-svc-phlpm
Jul 27 11:27:03.497: INFO: Got endpoints: latency-svc-phlpm [982.433496ms]
Jul 27 11:27:03.516: INFO: Created: latency-svc-zbz2n
Jul 27 11:27:03.546: INFO: Got endpoints: latency-svc-zbz2n [990.284316ms]
Jul 27 11:27:03.577: INFO: Created: latency-svc-z2mbf
Jul 27 11:27:03.593: INFO: Got endpoints: latency-svc-z2mbf [964.397847ms]
Jul 27 11:27:03.659: INFO: Created: latency-svc-zzgr6
Jul 27 11:27:03.714: INFO: Got endpoints: latency-svc-zzgr6 [1.048844609s]
Jul 27 11:27:03.757: INFO: Created: latency-svc-vzqmd
Jul 27 11:27:03.826: INFO: Got endpoints: latency-svc-vzqmd [1.130171167s]
Jul 27 11:27:03.829: INFO: Created: latency-svc-f9t69
Jul 27 11:27:03.833: INFO: Got endpoints: latency-svc-f9t69 [1.054562178s]
Jul 27 11:27:03.870: INFO: Created: latency-svc-blg2c
Jul 27 11:27:03.927: INFO: Got endpoints: latency-svc-blg2c [1.118796902s]
Jul 27 11:27:03.979: INFO: Created: latency-svc-2dbp8
Jul 27 11:27:03.984: INFO: Got endpoints: latency-svc-2dbp8 [1.085568059s]
Jul 27 11:27:04.014: INFO: Created: latency-svc-n9kjg
Jul 27 11:27:04.063: INFO: Got endpoints: latency-svc-n9kjg [1.12276091s]
Jul 27 11:27:04.126: INFO: Created: latency-svc-7vdv2
Jul 27 11:27:04.134: INFO: Got endpoints: latency-svc-7vdv2 [1.091873891s]
Jul 27 11:27:04.158: INFO: Created: latency-svc-7tr5q
Jul 27 11:27:04.171: INFO: Got endpoints: latency-svc-7tr5q [1.095567057s]
Jul 27 11:27:04.189: INFO: Created: latency-svc-hqd7f
Jul 27 11:27:04.202: INFO: Got endpoints: latency-svc-hqd7f [1.071862795s]
Jul 27 11:27:04.218: INFO: Created: latency-svc-bsvwj
Jul 27 11:27:04.299: INFO: Got endpoints: latency-svc-bsvwj [1.091573792s]
Jul 27 11:27:04.320: INFO: Created: latency-svc-5z8z4
Jul 27 11:27:04.335: INFO: Got endpoints: latency-svc-5z8z4 [1.010010108s]
Jul 27 11:27:04.362: INFO: Created: latency-svc-f2rmp
Jul 27 11:27:04.497: INFO: Got endpoints: latency-svc-f2rmp [1.156202515s]
Jul 27 11:27:04.518: INFO: Created: latency-svc-grrcb
Jul 27 11:27:04.543: INFO: Got endpoints: latency-svc-grrcb [1.04625175s]
Jul 27 11:27:04.572: INFO: Created: latency-svc-md68h
Jul 27 11:27:04.647: INFO: Got endpoints: latency-svc-md68h [1.100664283s]
Jul 27 11:27:04.662: INFO: Created: latency-svc-mxh5z
Jul 27 11:27:04.675: INFO: Got endpoints: latency-svc-mxh5z [1.082234773s]
Jul 27 11:27:04.705: INFO: Created: latency-svc-ldrbx
Jul 27 11:27:04.718: INFO: Got endpoints: latency-svc-ldrbx [1.003490161s]
Jul 27 11:27:04.746: INFO: Created: latency-svc-klpvv
Jul 27 11:27:04.855: INFO: Created: latency-svc-k79rp
Jul 27 11:27:04.856: INFO: Got endpoints: latency-svc-klpvv [1.02950906s]
Jul 27 11:27:04.921: INFO: Got endpoints: latency-svc-k79rp [1.088005471s]
Jul 27 11:27:05.025: INFO: Created: latency-svc-5wrtq
Jul 27 11:27:05.077: INFO: Got endpoints: latency-svc-5wrtq [1.149965988s]
Jul 27 11:27:05.426: INFO: Created: latency-svc-5c79c
Jul 27 11:27:05.611: INFO: Got endpoints: latency-svc-5c79c [1.626907569s]
Jul 27 11:27:05.615: INFO: Created: latency-svc-wtpkj
Jul 27 11:27:05.660: INFO: Got endpoints: latency-svc-wtpkj [583.846824ms]
Jul 27 11:27:05.767: INFO: Created: latency-svc-2jmk9
Jul 27 11:27:06.007: INFO: Got endpoints: latency-svc-2jmk9 [1.944441327s]
Jul 27 11:27:06.013: INFO: Created: latency-svc-xx7rm
Jul 27 11:27:06.021: INFO: Got endpoints: latency-svc-xx7rm [1.886407353s]
Jul 27 11:27:06.167: INFO: Created: latency-svc-4p8wf
Jul 27 11:27:06.189: INFO: Got endpoints: latency-svc-4p8wf [2.017542072s]
Jul 27 11:27:06.230: INFO: Created: latency-svc-cd7sj
Jul 27 11:27:06.242: INFO: Got endpoints: latency-svc-cd7sj [2.040418339s]
Jul 27 11:27:06.266: INFO: Created: latency-svc-8qqvt
Jul 27 11:27:06.318: INFO: Got endpoints: latency-svc-8qqvt [2.018522194s]
Jul 27 11:27:06.363: INFO: Created: latency-svc-zm95r
Jul 27 11:27:06.386: INFO: Got endpoints: latency-svc-zm95r [2.050877411s]
Jul 27 11:27:06.726: INFO: Created: latency-svc-fgh85
Jul 27 11:27:06.764: INFO: Created: latency-svc-frhhl
Jul 27 11:27:06.764: INFO: Got endpoints: latency-svc-fgh85 [2.267494334s]
Jul 27 11:27:06.795: INFO: Got endpoints: latency-svc-frhhl [2.2521297s]
Jul 27 11:27:06.869: INFO: Created: latency-svc-w9mqm
Jul 27 11:27:06.879: INFO: Got endpoints: latency-svc-w9mqm [2.232389274s]
Jul 27 11:27:06.896: INFO: Created: latency-svc-spfnd
Jul 27 11:27:06.928: INFO: Got endpoints: latency-svc-spfnd [2.253078464s]
Jul 27 11:27:06.950: INFO: Created: latency-svc-zfxh5
Jul 27 11:27:06.964: INFO: Got endpoints: latency-svc-zfxh5 [2.245854847s]
Jul 27 11:27:07.052: INFO: Created: latency-svc-sdbqb
Jul 27 11:27:07.066: INFO: Got endpoints: latency-svc-sdbqb [2.21063176s]
Jul 27 11:27:07.186: INFO: Created: latency-svc-xzc8l
Jul 27 11:27:07.218: INFO: Got endpoints: latency-svc-xzc8l [2.296556528s]
Jul 27 11:27:07.262: INFO: Created: latency-svc-djt9s
Jul 27 11:27:07.323: INFO: Got endpoints: latency-svc-djt9s [1.712035747s]
Jul 27 11:27:07.346: INFO: Created: latency-svc-7vzrs
Jul 27 11:27:07.361: INFO: Got endpoints: latency-svc-7vzrs [1.699992004s]
Jul 27 11:27:07.544: INFO: Created: latency-svc-wjrcz
Jul 27 11:27:07.677: INFO: Got endpoints: latency-svc-wjrcz [1.66939236s]
Jul 27 11:27:07.692: INFO: Created: latency-svc-7fs5l
Jul 27 11:27:07.702: INFO: Got endpoints: latency-svc-7fs5l [1.681658514s]
Jul 27 11:27:07.737: INFO: Created: latency-svc-tds9v
Jul 27 11:27:07.758: INFO: Got endpoints: latency-svc-tds9v [1.568817504s]
Jul 27 11:27:07.826: INFO: Created: latency-svc-f6cz7
Jul 27 11:27:07.835: INFO: Got endpoints: latency-svc-f6cz7 [1.593227932s]
Jul 27 11:27:07.862: INFO: Created: latency-svc-22424
Jul 27 11:27:07.890: INFO: Got endpoints: latency-svc-22424 [1.571768666s]
Jul 27 11:27:07.917: INFO: Created: latency-svc-7tr8s
Jul 27 11:27:07.970: INFO: Got endpoints: latency-svc-7tr8s [1.584653184s]
Jul 27 11:27:07.995: INFO: Created: latency-svc-96v2h
Jul 27 11:27:08.010: INFO: Got endpoints: latency-svc-96v2h [1.2454085s]
Jul 27 11:27:08.031: INFO: Created: latency-svc-4vsrj
Jul 27 11:27:08.060: INFO: Got endpoints: latency-svc-4vsrj [1.264460997s]
Jul 27 11:27:08.127: INFO: Created: latency-svc-45w4v
Jul 27 11:27:08.161: INFO: Got endpoints: latency-svc-45w4v [1.281489369s]
Jul 27 11:27:08.210: INFO: Created: latency-svc-zgtw5
Jul 27 11:27:08.257: INFO: Got endpoints: latency-svc-zgtw5 [1.329038199s]
Jul 27 11:27:08.270: INFO: Created: latency-svc-dcbfl
Jul 27 11:27:08.287: INFO: Got endpoints: latency-svc-dcbfl [1.323218716s]
Jul 27 11:27:08.330: INFO: Created: latency-svc-8nnxg
Jul 27 11:27:08.341: INFO: Got endpoints: latency-svc-8nnxg [1.275244642s]
Jul 27 11:27:08.390: INFO: Created: latency-svc-wf99j
Jul 27 11:27:08.397: INFO: Got endpoints: latency-svc-wf99j [1.179052517s]
Jul 27 11:27:08.420: INFO: Created: latency-svc-9vh75
Jul 27 11:27:08.446: INFO: Got endpoints: latency-svc-9vh75 [1.122465681s]
Jul 27 11:27:08.480: INFO: Created: latency-svc-df9p5
Jul 27 11:27:08.521: INFO: Got endpoints: latency-svc-df9p5 [1.160745577s]
Jul 27 11:27:08.552: INFO: Created: latency-svc-xvjft
Jul 27 11:27:08.588: INFO: Got endpoints: latency-svc-xvjft [911.497067ms]
Jul 27 11:27:08.612: INFO: Created: latency-svc-8nq52
Jul 27 11:27:08.646: INFO: Got endpoints: latency-svc-8nq52 [943.959046ms]
Jul 27 11:27:08.684: INFO: Created: latency-svc-qxtvr
Jul 27 11:27:08.698: INFO: Got endpoints: latency-svc-qxtvr [940.144482ms]
Jul 27 11:27:08.744: INFO: Created: latency-svc-z5x6x
Jul 27 11:27:08.778: INFO: Got endpoints: latency-svc-z5x6x [942.859316ms]
Jul 27 11:27:08.789: INFO: Created: latency-svc-55ldc
Jul 27 11:27:08.817: INFO: Got endpoints: latency-svc-55ldc [926.992429ms]
Jul 27 11:27:08.852: INFO: Created: latency-svc-drcwf
Jul 27 11:27:08.870: INFO: Got endpoints: latency-svc-drcwf [899.926615ms]
Jul 27 11:27:08.927: INFO: Created: latency-svc-d6dbc
Jul 27 11:27:08.954: INFO: Got endpoints: latency-svc-d6dbc [943.757576ms]
Jul 27 11:27:08.956: INFO: Created: latency-svc-b94kt
Jul 27 11:27:08.991: INFO: Got endpoints: latency-svc-b94kt [930.938785ms]
Jul 27 11:27:09.061: INFO: Created: latency-svc-wnwxf
Jul 27 11:27:09.063: INFO: Got endpoints: latency-svc-wnwxf [902.20829ms]
Jul 27 11:27:09.116: INFO: Created: latency-svc-k7xrn
Jul 27 11:27:09.145: INFO: Got endpoints: latency-svc-k7xrn [887.144004ms]
Jul 27 11:27:09.197: INFO: Created: latency-svc-4xlc4
Jul 27 11:27:09.260: INFO: Got endpoints: latency-svc-4xlc4 [972.988039ms]
Jul 27 11:27:09.260: INFO: Created: latency-svc-sfk2j
Jul 27 11:27:09.296: INFO: Got endpoints: latency-svc-sfk2j [954.581131ms]
Jul 27 11:27:09.347: INFO: Created: latency-svc-szgtj
Jul 27 11:27:09.374: INFO: Got endpoints: latency-svc-szgtj [977.040753ms]
Jul 27 11:27:09.375: INFO: Created: latency-svc-knxr7
Jul 27 11:27:09.384: INFO: Got endpoints: latency-svc-knxr7 [938.625128ms]
Jul 27 11:27:09.409: INFO: Created: latency-svc-lsqvj
Jul 27 11:27:09.421: INFO: Got endpoints: latency-svc-lsqvj [899.505043ms]
Jul 27 11:27:09.440: INFO: Created: latency-svc-rbgs4
Jul 27 11:27:09.483: INFO: Got endpoints: latency-svc-rbgs4 [895.214554ms]
Jul 27 11:27:09.485: INFO: Created: latency-svc-lz9bm
Jul 27 11:27:09.506: INFO: Got endpoints: latency-svc-lz9bm [859.951279ms]
Jul 27 11:27:09.536: INFO: Created: latency-svc-b5t75
Jul 27 11:27:09.548: INFO: Got endpoints: latency-svc-b5t75 [850.353808ms]
Jul 27 11:27:09.566: INFO: Created: latency-svc-qzwkt
Jul 27 11:27:09.579: INFO: Got endpoints: latency-svc-qzwkt [800.40798ms]
Jul 27 11:27:09.635: INFO: Created: latency-svc-qs5xn
Jul 27 11:27:09.645: INFO: Got endpoints: latency-svc-qs5xn [827.967775ms]
Jul 27 11:27:09.698: INFO: Created: latency-svc-9jbsf
Jul 27 11:27:09.723: INFO: Got endpoints: latency-svc-9jbsf [852.685562ms]
Jul 27 11:27:09.779: INFO: Created: latency-svc-sktcf
Jul 27 11:27:09.789: INFO: Got endpoints: latency-svc-sktcf [834.90841ms]
Jul 27 11:27:09.825: INFO: Created: latency-svc-wv85n
Jul 27 11:27:09.837: INFO: Got endpoints: latency-svc-wv85n [846.388253ms]
Jul 27 11:27:09.854: INFO: Created: latency-svc-dw8dv
Jul 27 11:27:09.867: INFO: Got endpoints: latency-svc-dw8dv [804.592525ms]
Jul 27 11:27:09.910: INFO: Created: latency-svc-wswcm
Jul 27 11:27:09.945: INFO: Got endpoints: latency-svc-wswcm [799.867113ms]
Jul 27 11:27:09.946: INFO: Created: latency-svc-cwj7s
Jul 27 11:27:09.986: INFO: Got endpoints: latency-svc-cwj7s [725.878778ms]
Jul 27 11:27:10.048: INFO: Created: latency-svc-pjgsr
Jul 27 11:27:10.058: INFO: Got endpoints: latency-svc-pjgsr [761.666181ms]
Jul 27 11:27:10.088: INFO: Created: latency-svc-wbsr5
Jul 27 11:27:10.103: INFO: Got endpoints: latency-svc-wbsr5 [729.301129ms]
Jul 27 11:27:10.124: INFO: Created: latency-svc-6xg8b
Jul 27 11:27:10.134: INFO: Got endpoints: latency-svc-6xg8b [749.802704ms]
Jul 27 11:27:10.198: INFO: Created: latency-svc-z8mnn
Jul 27 11:27:10.201: INFO: Got endpoints: latency-svc-z8mnn [779.615009ms]
Jul 27 11:27:10.274: INFO: Created: latency-svc-rd4qb
Jul 27 11:27:10.365: INFO: Got endpoints: latency-svc-rd4qb [881.88087ms]
Jul 27 11:27:10.368: INFO: Created: latency-svc-8l8l8
Jul 27 11:27:10.388: INFO: Got endpoints: latency-svc-8l8l8 [882.091909ms]
Jul 27 11:27:10.418: INFO: Created: latency-svc-cfkrp
Jul 27 11:27:10.434: INFO: Got endpoints: latency-svc-cfkrp [886.022737ms]
Jul 27 11:27:10.454: INFO: Created: latency-svc-5wrpm
Jul 27 11:27:10.533: INFO: Got endpoints: latency-svc-5wrpm [954.444799ms]
Jul 27 11:27:10.556: INFO: Created: latency-svc-w2fcc
Jul 27 11:27:10.571: INFO: Got endpoints: latency-svc-w2fcc [926.491505ms]
Jul 27 11:27:10.598: INFO: Created: latency-svc-fzdld
Jul 27 11:27:10.633: INFO: Got endpoints: latency-svc-fzdld [910.17855ms]
Jul 27 11:27:10.695: INFO: Created: latency-svc-kv92w
Jul 27 11:27:10.704: INFO: Got endpoints: latency-svc-kv92w [915.531777ms]
Jul 27 11:27:10.748: INFO: Created: latency-svc-8mj6k
Jul 27 11:27:10.765: INFO: Got endpoints: latency-svc-8mj6k [927.400134ms]
Jul 27 11:27:10.784: INFO: Created: latency-svc-wgmgv
Jul 27 11:27:10.826: INFO: Got endpoints: latency-svc-wgmgv [958.613422ms]
Jul 27 11:27:10.844: INFO: Created: latency-svc-bk8kb
Jul 27 11:27:10.874: INFO: Got endpoints: latency-svc-bk8kb [929.667636ms]
Jul 27 11:27:10.910: INFO: Created: latency-svc-spq6c
Jul 27 11:27:10.921: INFO: Got endpoints: latency-svc-spq6c [935.221806ms]
Jul 27 11:27:10.921: INFO: Latencies: [417.995618ms 544.761083ms 583.846824ms 725.878778ms 729.301129ms 749.802704ms 761.666181ms 779.615009ms 781.961156ms 784.671354ms 796.139259ms 799.867113ms 800.40798ms 804.592525ms 813.832444ms 824.146032ms 827.967775ms 829.578364ms 834.90841ms 835.428118ms 838.444684ms 841.275806ms 846.388253ms 849.147733ms 849.966355ms 850.353808ms 852.654873ms 852.685562ms 859.951279ms 861.938971ms 871.809582ms 872.766892ms 874.841434ms 881.88087ms 882.091909ms 886.022737ms 887.144004ms 891.17229ms 895.214554ms 899.505043ms 899.926615ms 902.20829ms 902.583667ms 910.17855ms 911.497067ms 912.526402ms 912.634237ms 915.319902ms 915.531777ms 921.798484ms 922.339548ms 922.40772ms 924.310828ms 926.491505ms 926.786976ms 926.992429ms 927.400134ms 929.667636ms 929.690994ms 930.938785ms 931.262658ms 935.221806ms 938.625128ms 938.997514ms 940.144482ms 942.859316ms 943.757576ms 943.959046ms 944.11698ms 945.130507ms 954.440316ms 954.444799ms 954.581131ms 958.613422ms 964.397847ms 969.786052ms 970.444311ms 972.054993ms 972.687723ms 972.988039ms 977.040753ms 980.460894ms 981.491314ms 982.433496ms 990.284316ms 999.695153ms 1.003490161s 1.010010108s 1.027153219s 1.02950906s 1.04625175s 1.048844609s 1.053501428s 1.054562178s 1.071862795s 1.082234773s 1.085568059s 1.088005471s 1.091573792s 1.091873891s 1.095567057s 1.100664283s 1.118796902s 1.122465681s 1.12276091s 1.130171167s 1.149965988s 1.15535213s 1.156202515s 1.160745577s 1.179052517s 1.222350529s 1.22819983s 1.2454085s 1.264460997s 1.275244642s 1.281489369s 1.323218716s 1.329038199s 1.346233454s 1.353682655s 1.400446591s 1.40698131s 1.412789387s 1.425503752s 1.555636102s 1.566762963s 1.568817504s 1.571768666s 1.584653184s 1.593227932s 1.622237246s 1.626907569s 1.66939236s 1.678836665s 1.681658514s 1.699992004s 1.712035747s 1.75221876s 1.769876502s 1.80306679s 1.810414312s 1.814479801s 1.82097024s 1.856898499s 1.886407353s 1.905746466s 1.911405667s 1.927553983s 1.944441327s 1.972375603s 1.978733138s 1.988819897s 1.998310719s 2.017542072s 2.018522194s 2.030105652s 2.037364893s 2.040418339s 2.0480194s 2.050877411s 2.051543875s 2.181630296s 2.21063176s 2.22605214s 2.232389274s 2.245854847s 2.2521297s 2.253078464s 2.267494334s 2.296556528s 2.306969581s 2.322185242s 2.35658097s 2.364808608s 2.39428145s 2.400684215s 2.432698369s 2.459705905s 2.467249004s 2.516941578s 2.539086318s 2.539342994s 2.577814883s 2.616572141s 2.639381798s 2.698132647s 2.851574921s 3.022610607s 3.285427552s 3.382643839s 3.607800435s 3.688082695s 3.698793901s 3.749934424s 4.002841675s 4.021140364s 4.093333714s 4.188654036s 4.398032279s]
Jul 27 11:27:10.921: INFO: 50 %ile: 1.095567057s
Jul 27 11:27:10.921: INFO: 90 %ile: 2.516941578s
Jul 27 11:27:10.921: INFO: 99 %ile: 4.188654036s
Jul 27 11:27:10.921: INFO: Total sample count: 200
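(For reference, the 50/90/99 %ile figures above are order statistics taken over the sorted list of the 200 per-service latencies. A minimal Go sketch of that calculation follows — illustrative only, not the e2e framework's own code; the exact index/rounding convention used by the framework is an assumption here.)

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile of an already-sorted slice of
// durations using a simple nearest-rank style index. The real framework
// helper may round differently.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(float64(len(sorted)) * p / 100.0)
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// A few of the sampled endpoint latencies from the log above.
	samples := []time.Duration{
		417995618 * time.Nanosecond,
		544761083 * time.Nanosecond,
		4398032279 * time.Nanosecond,
	}
	sort.Slice(samples, func(i, j int) bool { return samples[i] < samples[j] })
	fmt.Println("50 %ile:", percentile(samples, 50))
	fmt.Println("90 %ile:", percentile(samples, 90))
	fmt.Println("99 %ile:", percentile(samples, 99))
}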
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:27:10.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8361" for this suite.

• [SLOW TEST:24.868 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":193,"skipped":3467,"failed":0}
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:27:10.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9190.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9190.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9190.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9190.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9190.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9190.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9190.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9190.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9190.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9190.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.57.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.57.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.57.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.57.114_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9190.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9190.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9190.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9190.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9190.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9190.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9190.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9190.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9190.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9190.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9190.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 114.57.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.57.114_udp@PTR;check="$$(dig +tcp +noall +answer +search 114.57.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.57.114_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jul 27 11:27:19.355: INFO: Unable to read wheezy_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:19.360: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:19.386: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:19.403: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:19.594: INFO: Unable to read jessie_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:19.600: INFO: Unable to read jessie_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:19.606: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:19.636: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:19.680: INFO: Lookups using dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d failed for: [wheezy_udp@dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_udp@dns-test-service.dns-9190.svc.cluster.local jessie_tcp@dns-test-service.dns-9190.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local]

Jul 27 11:27:24.703: INFO: Unable to read wheezy_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:24.709: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:24.810: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:24.821: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:25.013: INFO: Unable to read jessie_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:25.115: INFO: Unable to read jessie_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:25.128: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:25.133: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:25.193: INFO: Lookups using dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d failed for: [wheezy_udp@dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_udp@dns-test-service.dns-9190.svc.cluster.local jessie_tcp@dns-test-service.dns-9190.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local]

Jul 27 11:27:29.683: INFO: Unable to read wheezy_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:29.732: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:29.741: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:29.750: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:29.869: INFO: Unable to read jessie_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:29.880: INFO: Unable to read jessie_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:29.883: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:29.913: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:30.051: INFO: Lookups using dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d failed for: [wheezy_udp@dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_udp@dns-test-service.dns-9190.svc.cluster.local jessie_tcp@dns-test-service.dns-9190.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local]

Jul 27 11:27:34.703: INFO: Unable to read wheezy_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:34.707: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:34.887: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:34.913: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:35.532: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: Get https://172.30.12.66:35995/api/v1/namespaces/dns-9190/pods/dns-test-d4558207-82fe-4566-879d-16fe8228b59d/proxy/results/wheezy_udp@_http._tcp.test-service-2.dns-9190.svc.cluster.local: stream error: stream ID 10541; INTERNAL_ERROR
Jul 27 11:27:35.938: INFO: Unable to read jessie_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:35.944: INFO: Unable to read jessie_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:36.242: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:36.432: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:36.681: INFO: Lookups using dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d failed for: [wheezy_udp@dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-9190.svc.cluster.local jessie_udp@dns-test-service.dns-9190.svc.cluster.local jessie_tcp@dns-test-service.dns-9190.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local]

Jul 27 11:27:39.779: INFO: Unable to read wheezy_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:39.789: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:39.823: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:39.870: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:40.137: INFO: Unable to read jessie_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:40.198: INFO: Unable to read jessie_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:40.226: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:40.266: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:40.393: INFO: Lookups using dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d failed for: [wheezy_udp@dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_udp@dns-test-service.dns-9190.svc.cluster.local jessie_tcp@dns-test-service.dns-9190.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local]

Jul 27 11:27:44.685: INFO: Unable to read wheezy_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:44.689: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:44.693: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:44.719: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:44.741: INFO: Unable to read jessie_udp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:44.744: INFO: Unable to read jessie_tcp@dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:44.747: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:44.750: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local from pod dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d: the server could not find the requested resource (get pods dns-test-d4558207-82fe-4566-879d-16fe8228b59d)
Jul 27 11:27:44.774: INFO: Lookups using dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d failed for: [wheezy_udp@dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@dns-test-service.dns-9190.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_udp@dns-test-service.dns-9190.svc.cluster.local jessie_tcp@dns-test-service.dns-9190.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9190.svc.cluster.local]

Jul 27 11:27:49.838: INFO: DNS probes using dns-9190/dns-test-d4558207-82fe-4566-879d-16fe8228b59d succeeded
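(The wheezy/jessie probe pods above verify that the service's A, SRV and PTR records resolve over both UDP and TCP by writing OK markers that the test reads back through the pod proxy. A minimal Go sketch of the same kind of in-cluster lookup using the standard resolver — an illustration, not the probe script itself:)

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Service A record, as probed above.
	addrs, err := net.DefaultResolver.LookupHost(ctx, "dns-test-service.dns-9190.svc.cluster.local")
	fmt.Println("A:", addrs, err)

	// SRV record published for the service's named "http" port.
	cname, srvs, err := net.DefaultResolver.LookupSRV(ctx, "http", "tcp", "dns-test-service.dns-9190.svc.cluster.local")
	fmt.Println("SRV:", cname, srvs, err)

	// Reverse (PTR) lookup of the ClusterIP used in the dig commands above.
	names, err := net.DefaultResolver.LookupAddr(ctx, "10.109.57.114")
	fmt.Println("PTR:", names, err)
}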

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:27:50.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9190" for this suite.

• [SLOW TEST:39.920 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":275,"completed":194,"skipped":3467,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:27:50.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service nodeport-service with the type=NodePort in namespace services-7646
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-7646
STEP: creating replication controller externalsvc in namespace services-7646
I0727 11:27:51.676614       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-7646, replica count: 2
I0727 11:27:54.727251       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:27:57.727492       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:28:00.727709       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Jul 27 11:28:00.810: INFO: Creating new exec pod
Jul 27 11:28:04.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-7646 execpodthmqp -- /bin/sh -x -c nslookup nodeport-service'
Jul 27 11:28:05.089: INFO: stderr: "I0727 11:28:04.983392    2286 log.go:172] (0xc000b12160) (0xc000912000) Create stream\nI0727 11:28:04.983442    2286 log.go:172] (0xc000b12160) (0xc000912000) Stream added, broadcasting: 1\nI0727 11:28:04.985891    2286 log.go:172] (0xc000b12160) Reply frame received for 1\nI0727 11:28:04.986356    2286 log.go:172] (0xc000b12160) (0xc000aec460) Create stream\nI0727 11:28:04.986388    2286 log.go:172] (0xc000b12160) (0xc000aec460) Stream added, broadcasting: 3\nI0727 11:28:04.988365    2286 log.go:172] (0xc000b12160) Reply frame received for 3\nI0727 11:28:04.988685    2286 log.go:172] (0xc000b12160) (0xc000aec000) Create stream\nI0727 11:28:04.988712    2286 log.go:172] (0xc000b12160) (0xc000aec000) Stream added, broadcasting: 5\nI0727 11:28:04.989568    2286 log.go:172] (0xc000b12160) Reply frame received for 5\nI0727 11:28:05.076244    2286 log.go:172] (0xc000b12160) Data frame received for 5\nI0727 11:28:05.076268    2286 log.go:172] (0xc000aec000) (5) Data frame handling\nI0727 11:28:05.076283    2286 log.go:172] (0xc000aec000) (5) Data frame sent\n+ nslookup nodeport-service\nI0727 11:28:05.081690    2286 log.go:172] (0xc000b12160) Data frame received for 3\nI0727 11:28:05.081714    2286 log.go:172] (0xc000aec460) (3) Data frame handling\nI0727 11:28:05.081730    2286 log.go:172] (0xc000aec460) (3) Data frame sent\nI0727 11:28:05.082408    2286 log.go:172] (0xc000b12160) Data frame received for 3\nI0727 11:28:05.082426    2286 log.go:172] (0xc000aec460) (3) Data frame handling\nI0727 11:28:05.082439    2286 log.go:172] (0xc000aec460) (3) Data frame sent\nI0727 11:28:05.082885    2286 log.go:172] (0xc000b12160) Data frame received for 3\nI0727 11:28:05.082909    2286 log.go:172] (0xc000aec460) (3) Data frame handling\nI0727 11:28:05.082930    2286 log.go:172] (0xc000b12160) Data frame received for 5\nI0727 11:28:05.082950    2286 log.go:172] (0xc000aec000) (5) Data frame handling\nI0727 11:28:05.084268    2286 log.go:172] (0xc000b12160) Data frame received for 1\nI0727 11:28:05.084286    2286 log.go:172] (0xc000912000) (1) Data frame handling\nI0727 11:28:05.084296    2286 log.go:172] (0xc000912000) (1) Data frame sent\nI0727 11:28:05.084313    2286 log.go:172] (0xc000b12160) (0xc000912000) Stream removed, broadcasting: 1\nI0727 11:28:05.084435    2286 log.go:172] (0xc000b12160) Go away received\nI0727 11:28:05.084641    2286 log.go:172] (0xc000b12160) (0xc000912000) Stream removed, broadcasting: 1\nI0727 11:28:05.084662    2286 log.go:172] (0xc000b12160) (0xc000aec460) Stream removed, broadcasting: 3\nI0727 11:28:05.084672    2286 log.go:172] (0xc000b12160) (0xc000aec000) Stream removed, broadcasting: 5\n"
Jul 27 11:28:05.089: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7646.svc.cluster.local\tcanonical name = externalsvc.services-7646.svc.cluster.local.\nName:\texternalsvc.services-7646.svc.cluster.local\nAddress: 10.105.14.36\n\n"
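(The stdout above shows the expected outcome: after the type change, nodeport-service resolves as a CNAME to externalsvc.services-7646.svc.cluster.local. A minimal Go sketch of the same check — illustrative; the test itself shells out to nslookup as logged above:)

package main

import (
	"fmt"
	"net"
)

func main() {
	// Once the Service is switched to type=ExternalName, its in-cluster DNS
	// name should resolve as a CNAME pointing at the configured external
	// name (here the externalsvc service created by the test).
	cname, err := net.LookupCNAME("nodeport-service.services-7646.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("canonical name:", cname)
	// Expected: externalsvc.services-7646.svc.cluster.local.
}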
STEP: deleting ReplicationController externalsvc in namespace services-7646, will wait for the garbage collector to delete the pods
Jul 27 11:28:05.150: INFO: Deleting ReplicationController externalsvc took: 6.771182ms
Jul 27 11:28:05.550: INFO: Terminating ReplicationController externalsvc pods took: 400.301718ms
Jul 27 11:28:13.508: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:28:13.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7646" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:22.657 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":195,"skipped":3482,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:28:13.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul 27 11:28:18.195: INFO: Successfully updated pod "labelsupdate855fb779-35ac-4203-a26a-d320dbf542c4"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:28:22.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2650" for this suite.

• [SLOW TEST:8.695 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3484,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:28:22.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-4272a8c6-2783-45db-8820-9f94d91bb334
STEP: Creating a pod to test consume secrets
Jul 27 11:28:22.355: INFO: Waiting up to 5m0s for pod "pod-secrets-f9bd1d57-dd12-476a-ad7c-7548edf6dab4" in namespace "secrets-9042" to be "Succeeded or Failed"
Jul 27 11:28:22.377: INFO: Pod "pod-secrets-f9bd1d57-dd12-476a-ad7c-7548edf6dab4": Phase="Pending", Reason="", readiness=false. Elapsed: 22.118146ms
Jul 27 11:28:24.505: INFO: Pod "pod-secrets-f9bd1d57-dd12-476a-ad7c-7548edf6dab4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149152888s
Jul 27 11:28:26.508: INFO: Pod "pod-secrets-f9bd1d57-dd12-476a-ad7c-7548edf6dab4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152465737s
STEP: Saw pod success
Jul 27 11:28:26.508: INFO: Pod "pod-secrets-f9bd1d57-dd12-476a-ad7c-7548edf6dab4" satisfied condition "Succeeded or Failed"
Jul 27 11:28:26.511: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-f9bd1d57-dd12-476a-ad7c-7548edf6dab4 container secret-volume-test: 
STEP: delete the pod
Jul 27 11:28:26.589: INFO: Waiting for pod pod-secrets-f9bd1d57-dd12-476a-ad7c-7548edf6dab4 to disappear
Jul 27 11:28:26.593: INFO: Pod pod-secrets-f9bd1d57-dd12-476a-ad7c-7548edf6dab4 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:28:26.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9042" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":197,"skipped":3515,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:28:26.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Jul 27 11:28:26.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8139'
Jul 27 11:28:26.994: INFO: stderr: ""
Jul 27 11:28:26.994: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jul 27 11:28:28.037: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 27 11:28:28.037: INFO: Found 0 / 1
Jul 27 11:28:29.048: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 27 11:28:29.049: INFO: Found 0 / 1
Jul 27 11:28:29.998: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 27 11:28:29.998: INFO: Found 0 / 1
Jul 27 11:28:31.121: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 27 11:28:31.121: INFO: Found 1 / 1
Jul 27 11:28:31.121: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jul 27 11:28:31.124: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 27 11:28:31.124: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jul 27 11:28:31.124: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config patch pod agnhost-master-jvs8x --namespace=kubectl-8139 -p {"metadata":{"annotations":{"x":"y"}}}'
Jul 27 11:28:31.219: INFO: stderr: ""
Jul 27 11:28:31.219: INFO: stdout: "pod/agnhost-master-jvs8x patched\n"
STEP: checking annotations
Jul 27 11:28:31.262: INFO: Selector matched 1 pods for map[app:agnhost]
Jul 27 11:28:31.262: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:28:31.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8139" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":275,"completed":198,"skipped":3522,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:28:31.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jul 27 11:28:31.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:28:47.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5841" for this suite.

• [SLOW TEST:16.075 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":199,"skipped":3529,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:28:47.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-08f41191-e639-43c0-8e8d-286d390a8208
STEP: Creating a pod to test consume configMaps
Jul 27 11:28:47.482: INFO: Waiting up to 5m0s for pod "pod-configmaps-ffe55f24-9282-4cac-8e35-58e47e0533bd" in namespace "configmap-3291" to be "Succeeded or Failed"
Jul 27 11:28:47.486: INFO: Pod "pod-configmaps-ffe55f24-9282-4cac-8e35-58e47e0533bd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.822598ms
Jul 27 11:28:49.490: INFO: Pod "pod-configmaps-ffe55f24-9282-4cac-8e35-58e47e0533bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007822214s
Jul 27 11:28:51.494: INFO: Pod "pod-configmaps-ffe55f24-9282-4cac-8e35-58e47e0533bd": Phase="Running", Reason="", readiness=true. Elapsed: 4.011802163s
Jul 27 11:28:53.499: INFO: Pod "pod-configmaps-ffe55f24-9282-4cac-8e35-58e47e0533bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016701771s
STEP: Saw pod success
Jul 27 11:28:53.499: INFO: Pod "pod-configmaps-ffe55f24-9282-4cac-8e35-58e47e0533bd" satisfied condition "Succeeded or Failed"
Jul 27 11:28:53.502: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-ffe55f24-9282-4cac-8e35-58e47e0533bd container configmap-volume-test: 
STEP: delete the pod
Jul 27 11:28:53.541: INFO: Waiting for pod pod-configmaps-ffe55f24-9282-4cac-8e35-58e47e0533bd to disappear
Jul 27 11:28:53.575: INFO: Pod pod-configmaps-ffe55f24-9282-4cac-8e35-58e47e0533bd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:28:53.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3291" for this suite.

• [SLOW TEST:6.236 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3539,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:28:53.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Jul 27 11:28:53.667: INFO: Waiting up to 5m0s for pod "pod-fa82fc37-c31a-4c8b-be43-82e7c3a51a3d" in namespace "emptydir-6113" to be "Succeeded or Failed"
Jul 27 11:28:53.732: INFO: Pod "pod-fa82fc37-c31a-4c8b-be43-82e7c3a51a3d": Phase="Pending", Reason="", readiness=false. Elapsed: 64.019685ms
Jul 27 11:28:55.735: INFO: Pod "pod-fa82fc37-c31a-4c8b-be43-82e7c3a51a3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067788766s
Jul 27 11:28:57.740: INFO: Pod "pod-fa82fc37-c31a-4c8b-be43-82e7c3a51a3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072007549s
STEP: Saw pod success
Jul 27 11:28:57.740: INFO: Pod "pod-fa82fc37-c31a-4c8b-be43-82e7c3a51a3d" satisfied condition "Succeeded or Failed"
Jul 27 11:28:57.746: INFO: Trying to get logs from node kali-worker2 pod pod-fa82fc37-c31a-4c8b-be43-82e7c3a51a3d container test-container: 
STEP: delete the pod
Jul 27 11:28:57.769: INFO: Waiting for pod pod-fa82fc37-c31a-4c8b-be43-82e7c3a51a3d to disappear
Jul 27 11:28:57.774: INFO: Pod pod-fa82fc37-c31a-4c8b-be43-82e7c3a51a3d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:28:57.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6113" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":201,"skipped":3540,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:28:57.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:29:32.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-156" for this suite.

• [SLOW TEST:35.199 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3555,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:29:32.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-b203a0fe-ff87-4dcb-96a4-e7ddb6f84e99
STEP: Creating a pod to test consume configMaps
Jul 27 11:29:33.095: INFO: Waiting up to 5m0s for pod "pod-configmaps-544f611d-8a64-43ff-8867-1d267dec6bf9" in namespace "configmap-5119" to be "Succeeded or Failed"
Jul 27 11:29:33.189: INFO: Pod "pod-configmaps-544f611d-8a64-43ff-8867-1d267dec6bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 94.786229ms
Jul 27 11:29:35.194: INFO: Pod "pod-configmaps-544f611d-8a64-43ff-8867-1d267dec6bf9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099081832s
Jul 27 11:29:37.198: INFO: Pod "pod-configmaps-544f611d-8a64-43ff-8867-1d267dec6bf9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.103581469s
STEP: Saw pod success
Jul 27 11:29:37.198: INFO: Pod "pod-configmaps-544f611d-8a64-43ff-8867-1d267dec6bf9" satisfied condition "Succeeded or Failed"
Jul 27 11:29:37.201: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-544f611d-8a64-43ff-8867-1d267dec6bf9 container configmap-volume-test: 
STEP: delete the pod
Jul 27 11:29:37.374: INFO: Waiting for pod pod-configmaps-544f611d-8a64-43ff-8867-1d267dec6bf9 to disappear
Jul 27 11:29:37.439: INFO: Pod pod-configmaps-544f611d-8a64-43ff-8867-1d267dec6bf9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:29:37.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5119" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3603,"failed":0}
SSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:29:37.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-d2b990e8-9ba5-4d18-8d59-9d6db0288e7f
STEP: Creating a pod to test consume configMaps
Jul 27 11:29:37.645: INFO: Waiting up to 5m0s for pod "pod-configmaps-c2b8ee10-a015-4d13-87b8-3dde2bb9fbb3" in namespace "configmap-2624" to be "Succeeded or Failed"
Jul 27 11:29:37.701: INFO: Pod "pod-configmaps-c2b8ee10-a015-4d13-87b8-3dde2bb9fbb3": Phase="Pending", Reason="", readiness=false. Elapsed: 56.615221ms
Jul 27 11:29:39.705: INFO: Pod "pod-configmaps-c2b8ee10-a015-4d13-87b8-3dde2bb9fbb3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060361015s
Jul 27 11:29:41.732: INFO: Pod "pod-configmaps-c2b8ee10-a015-4d13-87b8-3dde2bb9fbb3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087018784s
STEP: Saw pod success
Jul 27 11:29:41.732: INFO: Pod "pod-configmaps-c2b8ee10-a015-4d13-87b8-3dde2bb9fbb3" satisfied condition "Succeeded or Failed"
Jul 27 11:29:41.735: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-c2b8ee10-a015-4d13-87b8-3dde2bb9fbb3 container configmap-volume-test: 
STEP: delete the pod
Jul 27 11:29:41.774: INFO: Waiting for pod pod-configmaps-c2b8ee10-a015-4d13-87b8-3dde2bb9fbb3 to disappear
Jul 27 11:29:41.794: INFO: Pod pod-configmaps-c2b8ee10-a015-4d13-87b8-3dde2bb9fbb3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:29:41.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2624" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":204,"skipped":3608,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:29:41.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:29:41.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9810" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":205,"skipped":3700,"failed":0}

------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:29:42.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-2700
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-2700
I0727 11:29:42.321634       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-2700, replica count: 2
I0727 11:29:45.372061       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:29:48.372278       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 27 11:29:48.372: INFO: Creating new exec pod
Jul 27 11:29:53.550: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2700 execpodmg6hm -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 27 11:29:53.770: INFO: stderr: "I0727 11:29:53.683270    2350 log.go:172] (0xc000a5b8c0) (0xc0009728c0) Create stream\nI0727 11:29:53.683334    2350 log.go:172] (0xc000a5b8c0) (0xc0009728c0) Stream added, broadcasting: 1\nI0727 11:29:53.688577    2350 log.go:172] (0xc000a5b8c0) Reply frame received for 1\nI0727 11:29:53.688611    2350 log.go:172] (0xc000a5b8c0) (0xc00061d720) Create stream\nI0727 11:29:53.688620    2350 log.go:172] (0xc000a5b8c0) (0xc00061d720) Stream added, broadcasting: 3\nI0727 11:29:53.689808    2350 log.go:172] (0xc000a5b8c0) Reply frame received for 3\nI0727 11:29:53.689850    2350 log.go:172] (0xc000a5b8c0) (0xc00047ab40) Create stream\nI0727 11:29:53.689865    2350 log.go:172] (0xc000a5b8c0) (0xc00047ab40) Stream added, broadcasting: 5\nI0727 11:29:53.690737    2350 log.go:172] (0xc000a5b8c0) Reply frame received for 5\nI0727 11:29:53.759548    2350 log.go:172] (0xc000a5b8c0) Data frame received for 5\nI0727 11:29:53.759600    2350 log.go:172] (0xc00047ab40) (5) Data frame handling\nI0727 11:29:53.759631    2350 log.go:172] (0xc00047ab40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0727 11:29:53.759874    2350 log.go:172] (0xc000a5b8c0) Data frame received for 5\nI0727 11:29:53.759911    2350 log.go:172] (0xc00047ab40) (5) Data frame handling\nI0727 11:29:53.759941    2350 log.go:172] (0xc00047ab40) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0727 11:29:53.760296    2350 log.go:172] (0xc000a5b8c0) Data frame received for 5\nI0727 11:29:53.760338    2350 log.go:172] (0xc00047ab40) (5) Data frame handling\nI0727 11:29:53.760637    2350 log.go:172] (0xc000a5b8c0) Data frame received for 3\nI0727 11:29:53.760676    2350 log.go:172] (0xc00061d720) (3) Data frame handling\nI0727 11:29:53.763115    2350 log.go:172] (0xc000a5b8c0) Data frame received for 1\nI0727 11:29:53.763146    2350 log.go:172] (0xc0009728c0) (1) Data frame handling\nI0727 11:29:53.763168    2350 log.go:172] (0xc0009728c0) (1) Data frame sent\nI0727 11:29:53.763190    2350 log.go:172] (0xc000a5b8c0) (0xc0009728c0) Stream removed, broadcasting: 1\nI0727 11:29:53.763692    2350 log.go:172] (0xc000a5b8c0) (0xc0009728c0) Stream removed, broadcasting: 1\nI0727 11:29:53.763723    2350 log.go:172] (0xc000a5b8c0) (0xc00061d720) Stream removed, broadcasting: 3\nI0727 11:29:53.763935    2350 log.go:172] (0xc000a5b8c0) (0xc00047ab40) Stream removed, broadcasting: 5\n"
Jul 27 11:29:53.770: INFO: stdout: ""
Jul 27 11:29:53.771: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2700 execpodmg6hm -- /bin/sh -x -c nc -zv -t -w 2 10.109.50.235 80'
Jul 27 11:29:53.963: INFO: stderr: "I0727 11:29:53.899106    2372 log.go:172] (0xc0009d8790) (0xc00067d7c0) Create stream\nI0727 11:29:53.899156    2372 log.go:172] (0xc0009d8790) (0xc00067d7c0) Stream added, broadcasting: 1\nI0727 11:29:53.902022    2372 log.go:172] (0xc0009d8790) Reply frame received for 1\nI0727 11:29:53.902065    2372 log.go:172] (0xc0009d8790) (0xc00040ebe0) Create stream\nI0727 11:29:53.902073    2372 log.go:172] (0xc0009d8790) (0xc00040ebe0) Stream added, broadcasting: 3\nI0727 11:29:53.902968    2372 log.go:172] (0xc0009d8790) Reply frame received for 3\nI0727 11:29:53.902999    2372 log.go:172] (0xc0009d8790) (0xc0008cc000) Create stream\nI0727 11:29:53.903008    2372 log.go:172] (0xc0009d8790) (0xc0008cc000) Stream added, broadcasting: 5\nI0727 11:29:53.903794    2372 log.go:172] (0xc0009d8790) Reply frame received for 5\nI0727 11:29:53.955518    2372 log.go:172] (0xc0009d8790) Data frame received for 5\nI0727 11:29:53.955546    2372 log.go:172] (0xc0008cc000) (5) Data frame handling\nI0727 11:29:53.955555    2372 log.go:172] (0xc0008cc000) (5) Data frame sent\nI0727 11:29:53.955561    2372 log.go:172] (0xc0009d8790) Data frame received for 5\nI0727 11:29:53.955565    2372 log.go:172] (0xc0008cc000) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.50.235 80\nConnection to 10.109.50.235 80 port [tcp/http] succeeded!\nI0727 11:29:53.955592    2372 log.go:172] (0xc0009d8790) Data frame received for 3\nI0727 11:29:53.955640    2372 log.go:172] (0xc00040ebe0) (3) Data frame handling\nI0727 11:29:53.957524    2372 log.go:172] (0xc0009d8790) Data frame received for 1\nI0727 11:29:53.957543    2372 log.go:172] (0xc00067d7c0) (1) Data frame handling\nI0727 11:29:53.957555    2372 log.go:172] (0xc00067d7c0) (1) Data frame sent\nI0727 11:29:53.957783    2372 log.go:172] (0xc0009d8790) (0xc00067d7c0) Stream removed, broadcasting: 1\nI0727 11:29:53.957915    2372 log.go:172] (0xc0009d8790) Go away received\nI0727 11:29:53.958041    2372 log.go:172] (0xc0009d8790) (0xc00067d7c0) Stream removed, broadcasting: 1\nI0727 11:29:53.958055    2372 log.go:172] (0xc0009d8790) (0xc00040ebe0) Stream removed, broadcasting: 3\nI0727 11:29:53.958063    2372 log.go:172] (0xc0009d8790) (0xc0008cc000) Stream removed, broadcasting: 5\n"
Jul 27 11:29:53.963: INFO: stdout: ""
Jul 27 11:29:53.963: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2700 execpodmg6hm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 31485'
Jul 27 11:29:54.153: INFO: stderr: "I0727 11:29:54.086855    2396 log.go:172] (0xc000950b00) (0xc0006cd680) Create stream\nI0727 11:29:54.086907    2396 log.go:172] (0xc000950b00) (0xc0006cd680) Stream added, broadcasting: 1\nI0727 11:29:54.089385    2396 log.go:172] (0xc000950b00) Reply frame received for 1\nI0727 11:29:54.089423    2396 log.go:172] (0xc000950b00) (0xc00063b720) Create stream\nI0727 11:29:54.089440    2396 log.go:172] (0xc000950b00) (0xc00063b720) Stream added, broadcasting: 3\nI0727 11:29:54.090180    2396 log.go:172] (0xc000950b00) Reply frame received for 3\nI0727 11:29:54.090212    2396 log.go:172] (0xc000950b00) (0xc000938000) Create stream\nI0727 11:29:54.090224    2396 log.go:172] (0xc000950b00) (0xc000938000) Stream added, broadcasting: 5\nI0727 11:29:54.090908    2396 log.go:172] (0xc000950b00) Reply frame received for 5\nI0727 11:29:54.146050    2396 log.go:172] (0xc000950b00) Data frame received for 3\nI0727 11:29:54.146091    2396 log.go:172] (0xc00063b720) (3) Data frame handling\nI0727 11:29:54.146122    2396 log.go:172] (0xc000950b00) Data frame received for 5\nI0727 11:29:54.146139    2396 log.go:172] (0xc000938000) (5) Data frame handling\nI0727 11:29:54.146155    2396 log.go:172] (0xc000938000) (5) Data frame sent\nI0727 11:29:54.146176    2396 log.go:172] (0xc000950b00) Data frame received for 5\nI0727 11:29:54.146189    2396 log.go:172] (0xc000938000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 31485\nConnection to 172.18.0.13 31485 port [tcp/31485] succeeded!\nI0727 11:29:54.147498    2396 log.go:172] (0xc000950b00) Data frame received for 1\nI0727 11:29:54.147531    2396 log.go:172] (0xc0006cd680) (1) Data frame handling\nI0727 11:29:54.147542    2396 log.go:172] (0xc0006cd680) (1) Data frame sent\nI0727 11:29:54.147554    2396 log.go:172] (0xc000950b00) (0xc0006cd680) Stream removed, broadcasting: 1\nI0727 11:29:54.147582    2396 log.go:172] (0xc000950b00) Go away received\nI0727 11:29:54.147842    2396 log.go:172] (0xc000950b00) (0xc0006cd680) Stream removed, broadcasting: 1\nI0727 11:29:54.147856    2396 log.go:172] (0xc000950b00) (0xc00063b720) Stream removed, broadcasting: 3\nI0727 11:29:54.147862    2396 log.go:172] (0xc000950b00) (0xc000938000) Stream removed, broadcasting: 5\n"
Jul 27 11:29:54.153: INFO: stdout: ""
Jul 27 11:29:54.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2700 execpodmg6hm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 31485'
Jul 27 11:29:54.356: INFO: stderr: "I0727 11:29:54.281354    2417 log.go:172] (0xc0000e0790) (0xc0007b8140) Create stream\nI0727 11:29:54.281404    2417 log.go:172] (0xc0000e0790) (0xc0007b8140) Stream added, broadcasting: 1\nI0727 11:29:54.286458    2417 log.go:172] (0xc0000e0790) Reply frame received for 1\nI0727 11:29:54.286499    2417 log.go:172] (0xc0000e0790) (0xc0006255e0) Create stream\nI0727 11:29:54.286512    2417 log.go:172] (0xc0000e0790) (0xc0006255e0) Stream added, broadcasting: 3\nI0727 11:29:54.287397    2417 log.go:172] (0xc0000e0790) Reply frame received for 3\nI0727 11:29:54.287447    2417 log.go:172] (0xc0000e0790) (0xc000800000) Create stream\nI0727 11:29:54.287464    2417 log.go:172] (0xc0000e0790) (0xc000800000) Stream added, broadcasting: 5\nI0727 11:29:54.288211    2417 log.go:172] (0xc0000e0790) Reply frame received for 5\nI0727 11:29:54.350033    2417 log.go:172] (0xc0000e0790) Data frame received for 3\nI0727 11:29:54.350081    2417 log.go:172] (0xc0006255e0) (3) Data frame handling\nI0727 11:29:54.350123    2417 log.go:172] (0xc0000e0790) Data frame received for 5\nI0727 11:29:54.350154    2417 log.go:172] (0xc000800000) (5) Data frame handling\nI0727 11:29:54.350170    2417 log.go:172] (0xc000800000) (5) Data frame sent\nI0727 11:29:54.350187    2417 log.go:172] (0xc0000e0790) Data frame received for 5\n+ nc -zv -t -w 2 172.18.0.15 31485\nConnection to 172.18.0.15 31485 port [tcp/31485] succeeded!\nI0727 11:29:54.350203    2417 log.go:172] (0xc000800000) (5) Data frame handling\nI0727 11:29:54.351763    2417 log.go:172] (0xc0000e0790) Data frame received for 1\nI0727 11:29:54.351782    2417 log.go:172] (0xc0007b8140) (1) Data frame handling\nI0727 11:29:54.351796    2417 log.go:172] (0xc0007b8140) (1) Data frame sent\nI0727 11:29:54.351807    2417 log.go:172] (0xc0000e0790) (0xc0007b8140) Stream removed, broadcasting: 1\nI0727 11:29:54.351819    2417 log.go:172] (0xc0000e0790) Go away received\nI0727 11:29:54.352246    2417 log.go:172] (0xc0000e0790) (0xc0007b8140) Stream removed, broadcasting: 1\nI0727 11:29:54.352268    2417 log.go:172] (0xc0000e0790) (0xc0006255e0) Stream removed, broadcasting: 3\nI0727 11:29:54.352277    2417 log.go:172] (0xc0000e0790) (0xc000800000) Stream removed, broadcasting: 5\n"
Jul 27 11:29:54.356: INFO: stdout: ""
Jul 27 11:29:54.356: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:29:54.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2700" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.365 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":206,"skipped":3700,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:29:54.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:29:55.301: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:29:57.310: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:29:59.316: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:01.305: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:03.305: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:05.305: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:07.305: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:09.318: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:11.305: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:13.304: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:15.306: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:17.306: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:19.499: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = false)
Jul 27 11:30:21.306: INFO: The status of Pod test-webserver-e56f364a-7f26-4270-9ca8-3ce14a3088c3 is Running (Ready = true)
Jul 27 11:30:21.309: INFO: Container started at 2020-07-27 11:29:57 +0000 UTC, pod became ready at 2020-07-27 11:30:20 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:30:21.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6167" for this suite.

• [SLOW TEST:26.919 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3718,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:30:21.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 27 11:30:32.143: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:30:32.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7074" for this suite.

• [SLOW TEST:11.114 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3726,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:30:32.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service nodeport-test with type=NodePort in namespace services-2908
STEP: creating replication controller nodeport-test in namespace services-2908
I0727 11:30:32.802571       7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-2908, replica count: 2
I0727 11:30:35.853061       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:30:38.853312       7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 27 11:30:38.853: INFO: Creating new exec pod
Jul 27 11:30:43.916: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2908 execpod5dplh -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jul 27 11:30:44.148: INFO: stderr: "I0727 11:30:44.074104    2437 log.go:172] (0xc0009fc160) (0xc000944140) Create stream\nI0727 11:30:44.074196    2437 log.go:172] (0xc0009fc160) (0xc000944140) Stream added, broadcasting: 1\nI0727 11:30:44.079884    2437 log.go:172] (0xc0009fc160) Reply frame received for 1\nI0727 11:30:44.081086    2437 log.go:172] (0xc0009fc160) (0xc000944460) Create stream\nI0727 11:30:44.081130    2437 log.go:172] (0xc0009fc160) (0xc000944460) Stream added, broadcasting: 3\nI0727 11:30:44.082603    2437 log.go:172] (0xc0009fc160) Reply frame received for 3\nI0727 11:30:44.082639    2437 log.go:172] (0xc0009fc160) (0xc0004bc140) Create stream\nI0727 11:30:44.082650    2437 log.go:172] (0xc0009fc160) (0xc0004bc140) Stream added, broadcasting: 5\nI0727 11:30:44.083591    2437 log.go:172] (0xc0009fc160) Reply frame received for 5\nI0727 11:30:44.141884    2437 log.go:172] (0xc0009fc160) Data frame received for 5\nI0727 11:30:44.141921    2437 log.go:172] (0xc0004bc140) (5) Data frame handling\nI0727 11:30:44.141940    2437 log.go:172] (0xc0004bc140) (5) Data frame sent\nI0727 11:30:44.141949    2437 log.go:172] (0xc0009fc160) Data frame received for 5\nI0727 11:30:44.141961    2437 log.go:172] (0xc0004bc140) (5) Data frame handling\n+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0727 11:30:44.142087    2437 log.go:172] (0xc0004bc140) (5) Data frame sent\nI0727 11:30:44.142127    2437 log.go:172] (0xc0009fc160) Data frame received for 5\nI0727 11:30:44.142144    2437 log.go:172] (0xc0004bc140) (5) Data frame handling\nI0727 11:30:44.142174    2437 log.go:172] (0xc0009fc160) Data frame received for 3\nI0727 11:30:44.142197    2437 log.go:172] (0xc000944460) (3) Data frame handling\nI0727 11:30:44.143907    2437 log.go:172] (0xc0009fc160) Data frame received for 1\nI0727 11:30:44.143919    2437 log.go:172] (0xc000944140) (1) Data frame handling\nI0727 11:30:44.143925    2437 log.go:172] (0xc000944140) (1) Data frame sent\nI0727 11:30:44.144057    2437 log.go:172] (0xc0009fc160) (0xc000944140) Stream removed, broadcasting: 1\nI0727 11:30:44.144240    2437 log.go:172] (0xc0009fc160) Go away received\nI0727 11:30:44.144412    2437 log.go:172] (0xc0009fc160) (0xc000944140) Stream removed, broadcasting: 1\nI0727 11:30:44.144429    2437 log.go:172] (0xc0009fc160) (0xc000944460) Stream removed, broadcasting: 3\nI0727 11:30:44.144438    2437 log.go:172] (0xc0009fc160) (0xc0004bc140) Stream removed, broadcasting: 5\n"
Jul 27 11:30:44.149: INFO: stdout: ""
Jul 27 11:30:44.149: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2908 execpod5dplh -- /bin/sh -x -c nc -zv -t -w 2 10.110.155.15 80'
Jul 27 11:30:44.390: INFO: stderr: "I0727 11:30:44.302355    2457 log.go:172] (0xc000c41340) (0xc00091a8c0) Create stream\nI0727 11:30:44.302415    2457 log.go:172] (0xc000c41340) (0xc00091a8c0) Stream added, broadcasting: 1\nI0727 11:30:44.307319    2457 log.go:172] (0xc000c41340) Reply frame received for 1\nI0727 11:30:44.307354    2457 log.go:172] (0xc000c41340) (0xc0006397c0) Create stream\nI0727 11:30:44.307363    2457 log.go:172] (0xc000c41340) (0xc0006397c0) Stream added, broadcasting: 3\nI0727 11:30:44.308282    2457 log.go:172] (0xc000c41340) Reply frame received for 3\nI0727 11:30:44.308334    2457 log.go:172] (0xc000c41340) (0xc000516be0) Create stream\nI0727 11:30:44.308359    2457 log.go:172] (0xc000c41340) (0xc000516be0) Stream added, broadcasting: 5\nI0727 11:30:44.309582    2457 log.go:172] (0xc000c41340) Reply frame received for 5\nI0727 11:30:44.382090    2457 log.go:172] (0xc000c41340) Data frame received for 3\nI0727 11:30:44.382129    2457 log.go:172] (0xc0006397c0) (3) Data frame handling\nI0727 11:30:44.382156    2457 log.go:172] (0xc000c41340) Data frame received for 5\nI0727 11:30:44.382167    2457 log.go:172] (0xc000516be0) (5) Data frame handling\nI0727 11:30:44.382180    2457 log.go:172] (0xc000516be0) (5) Data frame sent\nI0727 11:30:44.382192    2457 log.go:172] (0xc000c41340) Data frame received for 5\nI0727 11:30:44.382207    2457 log.go:172] (0xc000516be0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.155.15 80\nConnection to 10.110.155.15 80 port [tcp/http] succeeded!\nI0727 11:30:44.383889    2457 log.go:172] (0xc000c41340) Data frame received for 1\nI0727 11:30:44.383908    2457 log.go:172] (0xc00091a8c0) (1) Data frame handling\nI0727 11:30:44.383920    2457 log.go:172] (0xc00091a8c0) (1) Data frame sent\nI0727 11:30:44.383934    2457 log.go:172] (0xc000c41340) (0xc00091a8c0) Stream removed, broadcasting: 1\nI0727 11:30:44.383949    2457 log.go:172] (0xc000c41340) Go away received\nI0727 11:30:44.384372    2457 log.go:172] (0xc000c41340) (0xc00091a8c0) Stream removed, broadcasting: 1\nI0727 11:30:44.384405    2457 log.go:172] (0xc000c41340) (0xc0006397c0) Stream removed, broadcasting: 3\nI0727 11:30:44.384420    2457 log.go:172] (0xc000c41340) (0xc000516be0) Stream removed, broadcasting: 5\n"
Jul 27 11:30:44.390: INFO: stdout: ""
Jul 27 11:30:44.390: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2908 execpod5dplh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 32359'
Jul 27 11:30:44.606: INFO: stderr: "I0727 11:30:44.523602    2478 log.go:172] (0xc000a54210) (0xc000629680) Create stream\nI0727 11:30:44.523667    2478 log.go:172] (0xc000a54210) (0xc000629680) Stream added, broadcasting: 1\nI0727 11:30:44.526220    2478 log.go:172] (0xc000a54210) Reply frame received for 1\nI0727 11:30:44.526256    2478 log.go:172] (0xc000a54210) (0xc000988000) Create stream\nI0727 11:30:44.526265    2478 log.go:172] (0xc000a54210) (0xc000988000) Stream added, broadcasting: 3\nI0727 11:30:44.527146    2478 log.go:172] (0xc000a54210) Reply frame received for 3\nI0727 11:30:44.527182    2478 log.go:172] (0xc000a54210) (0xc0009880a0) Create stream\nI0727 11:30:44.527196    2478 log.go:172] (0xc000a54210) (0xc0009880a0) Stream added, broadcasting: 5\nI0727 11:30:44.527884    2478 log.go:172] (0xc000a54210) Reply frame received for 5\nI0727 11:30:44.596450    2478 log.go:172] (0xc000a54210) Data frame received for 5\nI0727 11:30:44.596499    2478 log.go:172] (0xc0009880a0) (5) Data frame handling\nI0727 11:30:44.596532    2478 log.go:172] (0xc0009880a0) (5) Data frame sent\nI0727 11:30:44.596551    2478 log.go:172] (0xc000a54210) Data frame received for 5\nI0727 11:30:44.596564    2478 log.go:172] (0xc0009880a0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 32359\nConnection to 172.18.0.13 32359 port [tcp/32359] succeeded!\nI0727 11:30:44.596838    2478 log.go:172] (0xc000a54210) Data frame received for 3\nI0727 11:30:44.596879    2478 log.go:172] (0xc000988000) (3) Data frame handling\nI0727 11:30:44.598988    2478 log.go:172] (0xc000a54210) Data frame received for 1\nI0727 11:30:44.599104    2478 log.go:172] (0xc000629680) (1) Data frame handling\nI0727 11:30:44.599188    2478 log.go:172] (0xc000629680) (1) Data frame sent\nI0727 11:30:44.599274    2478 log.go:172] (0xc000a54210) (0xc000629680) Stream removed, broadcasting: 1\nI0727 11:30:44.599313    2478 log.go:172] (0xc000a54210) Go away received\nI0727 11:30:44.599760    2478 log.go:172] (0xc000a54210) (0xc000629680) Stream removed, broadcasting: 1\nI0727 11:30:44.599785    2478 log.go:172] (0xc000a54210) (0xc000988000) Stream removed, broadcasting: 3\nI0727 11:30:44.599799    2478 log.go:172] (0xc000a54210) (0xc0009880a0) Stream removed, broadcasting: 5\n"
Jul 27 11:30:44.606: INFO: stdout: ""
Jul 27 11:30:44.606: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-2908 execpod5dplh -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 32359'
Jul 27 11:30:44.807: INFO: stderr: "I0727 11:30:44.737746    2500 log.go:172] (0xc000abe000) (0xc0006a94a0) Create stream\nI0727 11:30:44.737817    2500 log.go:172] (0xc000abe000) (0xc0006a94a0) Stream added, broadcasting: 1\nI0727 11:30:44.740449    2500 log.go:172] (0xc000abe000) Reply frame received for 1\nI0727 11:30:44.740498    2500 log.go:172] (0xc000abe000) (0xc0006a9540) Create stream\nI0727 11:30:44.740513    2500 log.go:172] (0xc000abe000) (0xc0006a9540) Stream added, broadcasting: 3\nI0727 11:30:44.741707    2500 log.go:172] (0xc000abe000) Reply frame received for 3\nI0727 11:30:44.741746    2500 log.go:172] (0xc000abe000) (0xc00095c000) Create stream\nI0727 11:30:44.741761    2500 log.go:172] (0xc000abe000) (0xc00095c000) Stream added, broadcasting: 5\nI0727 11:30:44.742760    2500 log.go:172] (0xc000abe000) Reply frame received for 5\nI0727 11:30:44.799730    2500 log.go:172] (0xc000abe000) Data frame received for 3\nI0727 11:30:44.799784    2500 log.go:172] (0xc0006a9540) (3) Data frame handling\nI0727 11:30:44.799825    2500 log.go:172] (0xc000abe000) Data frame received for 5\nI0727 11:30:44.799852    2500 log.go:172] (0xc00095c000) (5) Data frame handling\nI0727 11:30:44.799878    2500 log.go:172] (0xc00095c000) (5) Data frame sent\nI0727 11:30:44.799897    2500 log.go:172] (0xc000abe000) Data frame received for 5\nI0727 11:30:44.799912    2500 log.go:172] (0xc00095c000) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 32359\nConnection to 172.18.0.15 32359 port [tcp/32359] succeeded!\nI0727 11:30:44.801647    2500 log.go:172] (0xc000abe000) Data frame received for 1\nI0727 11:30:44.801688    2500 log.go:172] (0xc0006a94a0) (1) Data frame handling\nI0727 11:30:44.801703    2500 log.go:172] (0xc0006a94a0) (1) Data frame sent\nI0727 11:30:44.801726    2500 log.go:172] (0xc000abe000) (0xc0006a94a0) Stream removed, broadcasting: 1\nI0727 11:30:44.801819    2500 log.go:172] (0xc000abe000) Go away received\nI0727 11:30:44.802113    2500 log.go:172] (0xc000abe000) (0xc0006a94a0) Stream removed, broadcasting: 1\nI0727 11:30:44.802134    2500 log.go:172] (0xc000abe000) (0xc0006a9540) Stream removed, broadcasting: 3\nI0727 11:30:44.802147    2500 log.go:172] (0xc000abe000) (0xc00095c000) Stream removed, broadcasting: 5\n"
Jul 27 11:30:44.807: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:30:44.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2908" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.387 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":209,"skipped":3773,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:30:44.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on node default medium
Jul 27 11:30:44.875: INFO: Waiting up to 5m0s for pod "pod-feee74c2-3b8d-493e-8220-79a3bb4def9a" in namespace "emptydir-3055" to be "Succeeded or Failed"
Jul 27 11:30:44.879: INFO: Pod "pod-feee74c2-3b8d-493e-8220-79a3bb4def9a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.374195ms
Jul 27 11:30:46.883: INFO: Pod "pod-feee74c2-3b8d-493e-8220-79a3bb4def9a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007146738s
Jul 27 11:30:48.887: INFO: Pod "pod-feee74c2-3b8d-493e-8220-79a3bb4def9a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011795055s
STEP: Saw pod success
Jul 27 11:30:48.887: INFO: Pod "pod-feee74c2-3b8d-493e-8220-79a3bb4def9a" satisfied condition "Succeeded or Failed"
Jul 27 11:30:48.890: INFO: Trying to get logs from node kali-worker pod pod-feee74c2-3b8d-493e-8220-79a3bb4def9a container test-container: 
STEP: delete the pod
Jul 27 11:30:48.940: INFO: Waiting for pod pod-feee74c2-3b8d-493e-8220-79a3bb4def9a to disappear
Jul 27 11:30:48.952: INFO: Pod pod-feee74c2-3b8d-493e-8220-79a3bb4def9a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:30:48.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3055" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":210,"skipped":3798,"failed":0}
SSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:30:48.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Jul 27 11:30:49.038: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jul 27 11:30:49.051: INFO: Waiting for terminating namespaces to be deleted...
Jul 27 11:30:49.054: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Jul 27 11:30:49.060: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Jul 27 11:30:49.060: INFO: 	Container kindnet-cni ready: true, restart count 1
Jul 27 11:30:49.060: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Jul 27 11:30:49.060: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 27 11:30:49.060: INFO: nodeport-test-48llf from services-2908 started at 2020-07-27 11:30:33 +0000 UTC (1 container statuses recorded)
Jul 27 11:30:49.060: INFO: 	Container nodeport-test ready: true, restart count 0
Jul 27 11:30:49.060: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Jul 27 11:30:49.067: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 27 11:30:49.067: INFO: 	Container kindnet-cni ready: true, restart count 1
Jul 27 11:30:49.067: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Jul 27 11:30:49.067: INFO: 	Container kube-proxy ready: true, restart count 0
Jul 27 11:30:49.067: INFO: execpod5dplh from services-2908 started at 2020-07-27 11:30:38 +0000 UTC (1 container statuses recorded)
Jul 27 11:30:49.067: INFO: 	Container agnhost-pause ready: true, restart count 0
Jul 27 11:30:49.067: INFO: nodeport-test-8dphw from services-2908 started at 2020-07-27 11:30:32 +0000 UTC (1 container statuses recorded)
Jul 27 11:30:49.067: INFO: 	Container nodeport-test ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-9bf32bba-b33a-4082-8b22-75d9493f6feb 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-9bf32bba-b33a-4082-8b22-75d9493f6feb off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-9bf32bba-b33a-4082-8b22-75d9493f6feb
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:30:59.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8212" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:10.534 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":275,"completed":211,"skipped":3807,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:30:59.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Jul 27 11:30:59.937: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-7661" to be "Succeeded or Failed"
Jul 27 11:31:00.080: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 143.101077ms
Jul 27 11:31:02.332: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.394912811s
Jul 27 11:31:04.367: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.430341821s
Jul 27 11:31:06.371: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.433577986s
Jul 27 11:31:08.499: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562305742s
Jul 27 11:31:10.571: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.634481336s
Jul 27 11:31:12.576: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.639001234s
STEP: Saw pod success
Jul 27 11:31:12.576: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Jul 27 11:31:12.579: INFO: Trying to get logs from node kali-worker2 pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jul 27 11:31:12.873: INFO: Waiting for pod pod-host-path-test to disappear
Jul 27 11:31:12.924: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:31:12.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-7661" for this suite.

• [SLOW TEST:13.413 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":212,"skipped":3820,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:31:12.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-043f452c-f279-4719-af10-e05253c6b86c in namespace container-probe-4897
Jul 27 11:31:17.366: INFO: Started pod busybox-043f452c-f279-4719-af10-e05253c6b86c in namespace container-probe-4897
STEP: checking the pod's current state and verifying that restartCount is present
Jul 27 11:31:17.368: INFO: Initial restart count of pod busybox-043f452c-f279-4719-af10-e05253c6b86c is 0
Jul 27 11:32:11.506: INFO: Restart count of pod container-probe-4897/busybox-043f452c-f279-4719-af10-e05253c6b86c is now 1 (54.138344929s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:32:11.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4897" for this suite.

• [SLOW TEST:58.632 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3825,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:32:11.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:32:11.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-614" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":214,"skipped":3857,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:32:11.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating api versions
Jul 27 11:32:11.705: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config api-versions'
Jul 27 11:32:11.891: INFO: stderr: ""
Jul 27 11:32:11.891: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:32:11.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4056" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":275,"completed":215,"skipped":3859,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:32:11.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jul 27 11:32:11.962: INFO: >>> kubeConfig: /root/.kube/config
Jul 27 11:32:14.975: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:32:25.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9576" for this suite.

• [SLOW TEST:13.300 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":216,"skipped":3885,"failed":0}
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:32:25.202: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:32:25.250: INFO: Creating ReplicaSet my-hostname-basic-ce69f66e-d46f-4d01-8c79-0f81efc8d058
Jul 27 11:32:25.290: INFO: Pod name my-hostname-basic-ce69f66e-d46f-4d01-8c79-0f81efc8d058: Found 0 pods out of 1
Jul 27 11:32:30.294: INFO: Pod name my-hostname-basic-ce69f66e-d46f-4d01-8c79-0f81efc8d058: Found 1 pods out of 1
Jul 27 11:32:30.294: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ce69f66e-d46f-4d01-8c79-0f81efc8d058" is running
Jul 27 11:32:30.296: INFO: Pod "my-hostname-basic-ce69f66e-d46f-4d01-8c79-0f81efc8d058-qvz69" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-27 11:32:25 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-27 11:32:28 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-27 11:32:28 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-07-27 11:32:25 +0000 UTC Reason: Message:}])
Jul 27 11:32:30.296: INFO: Trying to dial the pod
Jul 27 11:32:35.307: INFO: Controller my-hostname-basic-ce69f66e-d46f-4d01-8c79-0f81efc8d058: Got expected result from replica 1 [my-hostname-basic-ce69f66e-d46f-4d01-8c79-0f81efc8d058-qvz69]: "my-hostname-basic-ce69f66e-d46f-4d01-8c79-0f81efc8d058-qvz69", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:32:35.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9743" for this suite.

• [SLOW TEST:10.113 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":217,"skipped":3885,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:32:35.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-9hns
STEP: Creating a pod to test atomic-volume-subpath
Jul 27 11:32:35.425: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-9hns" in namespace "subpath-7232" to be "Succeeded or Failed"
Jul 27 11:32:35.447: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Pending", Reason="", readiness=false. Elapsed: 21.838724ms
Jul 27 11:32:37.525: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099824577s
Jul 27 11:32:39.681: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 4.256145578s
Jul 27 11:32:41.685: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 6.259999761s
Jul 27 11:32:43.689: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 8.264209638s
Jul 27 11:32:45.695: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 10.269478607s
Jul 27 11:32:47.699: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 12.274023502s
Jul 27 11:32:49.702: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 14.277173365s
Jul 27 11:32:51.706: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 16.281141198s
Jul 27 11:32:53.710: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 18.285132785s
Jul 27 11:32:55.715: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 20.289953659s
Jul 27 11:32:57.719: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Running", Reason="", readiness=true. Elapsed: 22.294253693s
Jul 27 11:32:59.727: INFO: Pod "pod-subpath-test-projected-9hns": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.30236973s
STEP: Saw pod success
Jul 27 11:32:59.727: INFO: Pod "pod-subpath-test-projected-9hns" satisfied condition "Succeeded or Failed"
Jul 27 11:32:59.730: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-projected-9hns container test-container-subpath-projected-9hns: 
STEP: delete the pod
Jul 27 11:32:59.846: INFO: Waiting for pod pod-subpath-test-projected-9hns to disappear
Jul 27 11:32:59.944: INFO: Pod pod-subpath-test-projected-9hns no longer exists
STEP: Deleting pod pod-subpath-test-projected-9hns
Jul 27 11:32:59.944: INFO: Deleting pod "pod-subpath-test-projected-9hns" in namespace "subpath-7232"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:32:59.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7232" for this suite.

• [SLOW TEST:24.639 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":218,"skipped":3892,"failed":0}
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:32:59.954: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:33:04.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6812" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":219,"skipped":3897,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:33:04.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul 27 11:33:08.885: INFO: Successfully updated pod "labelsupdate6b35a91c-10e1-4a2f-88e4-ce8b22a37860"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:33:10.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4382" for this suite.

• [SLOW TEST:6.836 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":220,"skipped":3917,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:33:10.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:33:11.528: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:33:13.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446391, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446391, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446391, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446391, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:33:16.598: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a webhook that the server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:33:16.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-104" for this suite.
STEP: Destroying namespace "webhook-104-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.127 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":221,"skipped":3918,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:33:17.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:33:17.871: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:33:19.965: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446398, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446397, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 27 11:33:21.968: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446398, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446398, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446397, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:33:25.007: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:33:25.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5999" for this suite.
STEP: Destroying namespace "webhook-5999-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.109 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":222,"skipped":3925,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:33:25.162: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:33:25.248: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config version'
Jul 27 11:33:25.707: INFO: stderr: ""
Jul 27 11:33:25.707: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.5\", GitCommit:\"e6503f8d8f769ace2f338794c914a96fc335df0f\", GitTreeState:\"clean\", BuildDate:\"2020-07-09T18:53:46Z\", GoVersion:\"go1.13.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.4\", GitCommit:\"c96aede7b5205121079932896c4ad89bb93260af\", GitTreeState:\"clean\", BuildDate:\"2020-06-20T01:49:49Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:33:25.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6747" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":275,"completed":223,"skipped":3926,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:33:25.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-d0192e51-4aad-4b5e-83b8-ace92a9cc710 in namespace container-probe-9517
Jul 27 11:33:29.795: INFO: Started pod busybox-d0192e51-4aad-4b5e-83b8-ace92a9cc710 in namespace container-probe-9517
STEP: checking the pod's current state and verifying that restartCount is present
Jul 27 11:33:29.798: INFO: Initial restart count of pod busybox-d0192e51-4aad-4b5e-83b8-ace92a9cc710 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:37:30.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9517" for this suite.

• [SLOW TEST:245.141 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3936,"failed":0}
SSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:37:30.858: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jul 27 11:37:39.115: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 27 11:37:39.200: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 27 11:37:41.200: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 27 11:37:41.261: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 27 11:37:43.200: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 27 11:37:43.205: INFO: Pod pod-with-poststart-exec-hook still exists
Jul 27 11:37:45.200: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jul 27 11:37:45.204: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:37:45.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5255" for this suite.

• [SLOW TEST:14.355 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3940,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:37:45.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-69815372-9df9-4dad-a887-72427e6e74e4
STEP: Creating a pod to test consume secrets
Jul 27 11:37:45.317: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-98156945-571c-4145-82c9-366fb94b3025" in namespace "projected-8516" to be "Succeeded or Failed"
Jul 27 11:37:45.321: INFO: Pod "pod-projected-secrets-98156945-571c-4145-82c9-366fb94b3025": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038543ms
Jul 27 11:37:47.326: INFO: Pod "pod-projected-secrets-98156945-571c-4145-82c9-366fb94b3025": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008845118s
Jul 27 11:37:49.331: INFO: Pod "pod-projected-secrets-98156945-571c-4145-82c9-366fb94b3025": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014371885s
STEP: Saw pod success
Jul 27 11:37:49.331: INFO: Pod "pod-projected-secrets-98156945-571c-4145-82c9-366fb94b3025" satisfied condition "Succeeded or Failed"
Jul 27 11:37:49.334: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-98156945-571c-4145-82c9-366fb94b3025 container projected-secret-volume-test: 
STEP: delete the pod
Jul 27 11:37:49.373: INFO: Waiting for pod pod-projected-secrets-98156945-571c-4145-82c9-366fb94b3025 to disappear
Jul 27 11:37:49.381: INFO: Pod pod-projected-secrets-98156945-571c-4145-82c9-366fb94b3025 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:37:49.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8516" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":226,"skipped":3946,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:37:49.388: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:37:50.469: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:37:52.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446670, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446670, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446670, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446670, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 27 11:37:54.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446670, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446670, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446670, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446670, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:37:57.549: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:38:07.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6717" for this suite.
STEP: Destroying namespace "webhook-6717-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:18.686 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":227,"skipped":3952,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:38:08.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-cd5f8f97-6cb0-4652-b479-d7af9e7e8100
STEP: Creating a pod to test consume secrets
Jul 27 11:38:08.726: INFO: Waiting up to 5m0s for pod "pod-secrets-525eb6bb-42b1-4724-ac60-feec4611ebce" in namespace "secrets-413" to be "Succeeded or Failed"
Jul 27 11:38:08.979: INFO: Pod "pod-secrets-525eb6bb-42b1-4724-ac60-feec4611ebce": Phase="Pending", Reason="", readiness=false. Elapsed: 252.355073ms
Jul 27 11:38:10.982: INFO: Pod "pod-secrets-525eb6bb-42b1-4724-ac60-feec4611ebce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.255826322s
Jul 27 11:38:12.996: INFO: Pod "pod-secrets-525eb6bb-42b1-4724-ac60-feec4611ebce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.269833055s
Jul 27 11:38:15.000: INFO: Pod "pod-secrets-525eb6bb-42b1-4724-ac60-feec4611ebce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.27380651s
STEP: Saw pod success
Jul 27 11:38:15.000: INFO: Pod "pod-secrets-525eb6bb-42b1-4724-ac60-feec4611ebce" satisfied condition "Succeeded or Failed"
Jul 27 11:38:15.003: INFO: Trying to get logs from node kali-worker pod pod-secrets-525eb6bb-42b1-4724-ac60-feec4611ebce container secret-volume-test: 
STEP: delete the pod
Jul 27 11:38:15.054: INFO: Waiting for pod pod-secrets-525eb6bb-42b1-4724-ac60-feec4611ebce to disappear
Jul 27 11:38:15.069: INFO: Pod pod-secrets-525eb6bb-42b1-4724-ac60-feec4611ebce no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:38:15.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-413" for this suite.
STEP: Destroying namespace "secret-namespace-1805" for this suite.

• [SLOW TEST:7.008 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3953,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:38:15.082: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:38:21.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8380" for this suite.
STEP: Destroying namespace "nsdeletetest-6182" for this suite.
Jul 27 11:38:21.418: INFO: Namespace nsdeletetest-6182 was already deleted
STEP: Destroying namespace "nsdeletetest-9461" for this suite.

• [SLOW TEST:6.339 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":229,"skipped":3957,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:38:21.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jul 27 11:38:21.564: INFO: >>> kubeConfig: /root/.kube/config
Jul 27 11:38:24.518: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:38:34.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1622" for this suite.

• [SLOW TEST:12.876 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":230,"skipped":3967,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:38:34.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:38:34.454: INFO: Waiting up to 5m0s for pod "downwardapi-volume-135e3d29-a63c-43b8-83dc-662a05269e9c" in namespace "projected-5293" to be "Succeeded or Failed"
Jul 27 11:38:34.459: INFO: Pod "downwardapi-volume-135e3d29-a63c-43b8-83dc-662a05269e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.289131ms
Jul 27 11:38:36.462: INFO: Pod "downwardapi-volume-135e3d29-a63c-43b8-83dc-662a05269e9c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007902321s
Jul 27 11:38:38.466: INFO: Pod "downwardapi-volume-135e3d29-a63c-43b8-83dc-662a05269e9c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011789139s
STEP: Saw pod success
Jul 27 11:38:38.466: INFO: Pod "downwardapi-volume-135e3d29-a63c-43b8-83dc-662a05269e9c" satisfied condition "Succeeded or Failed"
Jul 27 11:38:38.469: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-135e3d29-a63c-43b8-83dc-662a05269e9c container client-container: 
STEP: delete the pod
Jul 27 11:38:38.505: INFO: Waiting for pod downwardapi-volume-135e3d29-a63c-43b8-83dc-662a05269e9c to disappear
Jul 27 11:38:38.513: INFO: Pod downwardapi-volume-135e3d29-a63c-43b8-83dc-662a05269e9c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:38:38.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5293" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":4066,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:38:38.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-4845
STEP: Creating a pod to test atomic-volume-subpath
Jul 27 11:38:38.647: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-4845" in namespace "subpath-3809" to be "Succeeded or Failed"
Jul 27 11:38:38.663: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Pending", Reason="", readiness=false. Elapsed: 16.005735ms
Jul 27 11:38:40.667: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019664674s
Jul 27 11:38:42.671: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 4.024074546s
Jul 27 11:38:44.675: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 6.028398879s
Jul 27 11:38:46.679: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 8.032561548s
Jul 27 11:38:48.684: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 10.036916568s
Jul 27 11:38:50.688: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 12.040704856s
Jul 27 11:38:52.691: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 14.04416044s
Jul 27 11:38:54.695: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 16.048585935s
Jul 27 11:38:56.700: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 18.053100629s
Jul 27 11:38:58.705: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 20.057740761s
Jul 27 11:39:00.709: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Running", Reason="", readiness=true. Elapsed: 22.062512634s
Jul 27 11:39:02.727: INFO: Pod "pod-subpath-test-configmap-4845": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.079892594s
STEP: Saw pod success
Jul 27 11:39:02.727: INFO: Pod "pod-subpath-test-configmap-4845" satisfied condition "Succeeded or Failed"
Jul 27 11:39:02.733: INFO: Trying to get logs from node kali-worker2 pod pod-subpath-test-configmap-4845 container test-container-subpath-configmap-4845: 
STEP: delete the pod
Jul 27 11:39:02.826: INFO: Waiting for pod pod-subpath-test-configmap-4845 to disappear
Jul 27 11:39:02.865: INFO: Pod pod-subpath-test-configmap-4845 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-4845
Jul 27 11:39:02.865: INFO: Deleting pod "pod-subpath-test-configmap-4845" in namespace "subpath-3809"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:39:02.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3809" for this suite.

• [SLOW TEST:24.356 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":232,"skipped":4076,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:39:02.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-953
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jul 27 11:39:02.950: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Jul 27 11:39:03.039: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:39:05.095: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:39:07.047: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Jul 27 11:39:09.042: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:39:11.044: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:39:13.044: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:39:15.043: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:39:17.044: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:39:19.043: INFO: The status of Pod netserver-0 is Running (Ready = false)
Jul 27 11:39:21.043: INFO: The status of Pod netserver-0 is Running (Ready = true)
Jul 27 11:39:21.049: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul 27 11:39:23.053: INFO: The status of Pod netserver-1 is Running (Ready = false)
Jul 27 11:39:25.054: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Jul 27 11:39:31.173: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.231:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-953 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 27 11:39:31.173: INFO: >>> kubeConfig: /root/.kube/config
I0727 11:39:31.205267       7 log.go:172] (0xc0025693f0) (0xc002481b80) Create stream
I0727 11:39:31.205300       7 log.go:172] (0xc0025693f0) (0xc002481b80) Stream added, broadcasting: 1
I0727 11:39:31.207997       7 log.go:172] (0xc0025693f0) Reply frame received for 1
I0727 11:39:31.208031       7 log.go:172] (0xc0025693f0) (0xc001493720) Create stream
I0727 11:39:31.208040       7 log.go:172] (0xc0025693f0) (0xc001493720) Stream added, broadcasting: 3
I0727 11:39:31.208918       7 log.go:172] (0xc0025693f0) Reply frame received for 3
I0727 11:39:31.208951       7 log.go:172] (0xc0025693f0) (0xc001e86dc0) Create stream
I0727 11:39:31.208966       7 log.go:172] (0xc0025693f0) (0xc001e86dc0) Stream added, broadcasting: 5
I0727 11:39:31.209763       7 log.go:172] (0xc0025693f0) Reply frame received for 5
I0727 11:39:31.264352       7 log.go:172] (0xc0025693f0) Data frame received for 5
I0727 11:39:31.264456       7 log.go:172] (0xc001e86dc0) (5) Data frame handling
I0727 11:39:31.264491       7 log.go:172] (0xc0025693f0) Data frame received for 3
I0727 11:39:31.264514       7 log.go:172] (0xc001493720) (3) Data frame handling
I0727 11:39:31.264559       7 log.go:172] (0xc001493720) (3) Data frame sent
I0727 11:39:31.264574       7 log.go:172] (0xc0025693f0) Data frame received for 3
I0727 11:39:31.264586       7 log.go:172] (0xc001493720) (3) Data frame handling
I0727 11:39:31.266350       7 log.go:172] (0xc0025693f0) Data frame received for 1
I0727 11:39:31.266386       7 log.go:172] (0xc002481b80) (1) Data frame handling
I0727 11:39:31.266475       7 log.go:172] (0xc002481b80) (1) Data frame sent
I0727 11:39:31.266540       7 log.go:172] (0xc0025693f0) (0xc002481b80) Stream removed, broadcasting: 1
I0727 11:39:31.266604       7 log.go:172] (0xc0025693f0) Go away received
I0727 11:39:31.266730       7 log.go:172] (0xc0025693f0) (0xc002481b80) Stream removed, broadcasting: 1
I0727 11:39:31.266760       7 log.go:172] (0xc0025693f0) (0xc001493720) Stream removed, broadcasting: 3
I0727 11:39:31.266783       7 log.go:172] (0xc0025693f0) (0xc001e86dc0) Stream removed, broadcasting: 5
Jul 27 11:39:31.266: INFO: Found all expected endpoints: [netserver-0]
Jul 27 11:39:31.270: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.112:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-953 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jul 27 11:39:31.270: INFO: >>> kubeConfig: /root/.kube/config
I0727 11:39:31.302226       7 log.go:172] (0xc002c70420) (0xc001493e00) Create stream
I0727 11:39:31.302263       7 log.go:172] (0xc002c70420) (0xc001493e00) Stream added, broadcasting: 1
I0727 11:39:31.304952       7 log.go:172] (0xc002c70420) Reply frame received for 1
I0727 11:39:31.305008       7 log.go:172] (0xc002c70420) (0xc002481e00) Create stream
I0727 11:39:31.305039       7 log.go:172] (0xc002c70420) (0xc002481e00) Stream added, broadcasting: 3
I0727 11:39:31.306022       7 log.go:172] (0xc002c70420) Reply frame received for 3
I0727 11:39:31.306046       7 log.go:172] (0xc002c70420) (0xc002481f40) Create stream
I0727 11:39:31.306055       7 log.go:172] (0xc002c70420) (0xc002481f40) Stream added, broadcasting: 5
I0727 11:39:31.307048       7 log.go:172] (0xc002c70420) Reply frame received for 5
I0727 11:39:31.389182       7 log.go:172] (0xc002c70420) Data frame received for 3
I0727 11:39:31.389213       7 log.go:172] (0xc002481e00) (3) Data frame handling
I0727 11:39:31.389221       7 log.go:172] (0xc002481e00) (3) Data frame sent
I0727 11:39:31.389226       7 log.go:172] (0xc002c70420) Data frame received for 3
I0727 11:39:31.389233       7 log.go:172] (0xc002481e00) (3) Data frame handling
I0727 11:39:31.389290       7 log.go:172] (0xc002c70420) Data frame received for 5
I0727 11:39:31.389309       7 log.go:172] (0xc002481f40) (5) Data frame handling
I0727 11:39:31.390592       7 log.go:172] (0xc002c70420) Data frame received for 1
I0727 11:39:31.390611       7 log.go:172] (0xc001493e00) (1) Data frame handling
I0727 11:39:31.390630       7 log.go:172] (0xc001493e00) (1) Data frame sent
I0727 11:39:31.390643       7 log.go:172] (0xc002c70420) (0xc001493e00) Stream removed, broadcasting: 1
I0727 11:39:31.390653       7 log.go:172] (0xc002c70420) Go away received
I0727 11:39:31.390779       7 log.go:172] (0xc002c70420) (0xc001493e00) Stream removed, broadcasting: 1
I0727 11:39:31.390797       7 log.go:172] (0xc002c70420) (0xc002481e00) Stream removed, broadcasting: 3
I0727 11:39:31.390805       7 log.go:172] (0xc002c70420) (0xc002481f40) Stream removed, broadcasting: 5
Jul 27 11:39:31.390: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:39:31.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-953" for this suite.

• [SLOW TEST:28.523 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":233,"skipped":4084,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:39:31.399: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-2abc7d78-4b4e-4ec8-83b8-b535113e5145
STEP: Creating a pod to test consume configMaps
Jul 27 11:39:31.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-d373c1c1-b7b0-4e3f-887a-97af50ef022a" in namespace "configmap-3654" to be "Succeeded or Failed"
Jul 27 11:39:31.715: INFO: Pod "pod-configmaps-d373c1c1-b7b0-4e3f-887a-97af50ef022a": Phase="Pending", Reason="", readiness=false. Elapsed: 125.60321ms
Jul 27 11:39:34.087: INFO: Pod "pod-configmaps-d373c1c1-b7b0-4e3f-887a-97af50ef022a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.497324248s
Jul 27 11:39:36.091: INFO: Pod "pod-configmaps-d373c1c1-b7b0-4e3f-887a-97af50ef022a": Phase="Running", Reason="", readiness=true. Elapsed: 4.501221689s
Jul 27 11:39:38.095: INFO: Pod "pod-configmaps-d373c1c1-b7b0-4e3f-887a-97af50ef022a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.505051556s
STEP: Saw pod success
Jul 27 11:39:38.095: INFO: Pod "pod-configmaps-d373c1c1-b7b0-4e3f-887a-97af50ef022a" satisfied condition "Succeeded or Failed"
Jul 27 11:39:38.097: INFO: Trying to get logs from node kali-worker2 pod pod-configmaps-d373c1c1-b7b0-4e3f-887a-97af50ef022a container configmap-volume-test: 
STEP: delete the pod
Jul 27 11:39:38.328: INFO: Waiting for pod pod-configmaps-d373c1c1-b7b0-4e3f-887a-97af50ef022a to disappear
Jul 27 11:39:38.342: INFO: Pod pod-configmaps-d373c1c1-b7b0-4e3f-887a-97af50ef022a no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:39:38.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3654" for this suite.

• [SLOW TEST:6.950 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":4096,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:39:38.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:39:39.200: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:39:41.280: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446779, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446779, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446779, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446779, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:39:44.314: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:39:44.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3632-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:39:45.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9047" for this suite.
STEP: Destroying namespace "webhook-9047-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.359 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":235,"skipped":4100,"failed":0}
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:39:45.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-1a22a58d-0c8d-452d-b391-b9779a59817e
STEP: Creating a pod to test consume configMaps
Jul 27 11:39:45.947: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ff1ba3f-f7a6-4e23-b89a-b0e37e1fccdc" in namespace "configmap-7288" to be "Succeeded or Failed"
Jul 27 11:39:46.093: INFO: Pod "pod-configmaps-1ff1ba3f-f7a6-4e23-b89a-b0e37e1fccdc": Phase="Pending", Reason="", readiness=false. Elapsed: 146.062045ms
Jul 27 11:39:48.097: INFO: Pod "pod-configmaps-1ff1ba3f-f7a6-4e23-b89a-b0e37e1fccdc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15002038s
Jul 27 11:39:50.101: INFO: Pod "pod-configmaps-1ff1ba3f-f7a6-4e23-b89a-b0e37e1fccdc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154624351s
STEP: Saw pod success
Jul 27 11:39:50.101: INFO: Pod "pod-configmaps-1ff1ba3f-f7a6-4e23-b89a-b0e37e1fccdc" satisfied condition "Succeeded or Failed"
Jul 27 11:39:50.105: INFO: Trying to get logs from node kali-worker pod pod-configmaps-1ff1ba3f-f7a6-4e23-b89a-b0e37e1fccdc container configmap-volume-test: 
STEP: delete the pod
Jul 27 11:39:50.208: INFO: Waiting for pod pod-configmaps-1ff1ba3f-f7a6-4e23-b89a-b0e37e1fccdc to disappear
Jul 27 11:39:50.235: INFO: Pod pod-configmaps-1ff1ba3f-f7a6-4e23-b89a-b0e37e1fccdc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:39:50.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7288" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":236,"skipped":4101,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:39:50.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:39:50.943: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:39:52.955: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446790, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446790, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446791, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446790, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:39:55.984: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jul 27 11:39:56.007: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:39:56.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5559" for this suite.
STEP: Destroying namespace "webhook-5559-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:5.885 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":237,"skipped":4121,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:39:56.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0727 11:40:06.252073       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jul 27 11:40:06.252: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:40:06.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1670" for this suite.

• [SLOW TEST:10.132 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":238,"skipped":4125,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:40:06.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jul 27 11:40:06.346: INFO: Waiting up to 5m0s for pod "pod-87c56d21-4a41-4e7e-bd36-bb9c9ece8ecd" in namespace "emptydir-7832" to be "Succeeded or Failed"
Jul 27 11:40:06.350: INFO: Pod "pod-87c56d21-4a41-4e7e-bd36-bb9c9ece8ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.60863ms
Jul 27 11:40:08.354: INFO: Pod "pod-87c56d21-4a41-4e7e-bd36-bb9c9ece8ecd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007869957s
Jul 27 11:40:10.358: INFO: Pod "pod-87c56d21-4a41-4e7e-bd36-bb9c9ece8ecd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012282452s
STEP: Saw pod success
Jul 27 11:40:10.358: INFO: Pod "pod-87c56d21-4a41-4e7e-bd36-bb9c9ece8ecd" satisfied condition "Succeeded or Failed"
Jul 27 11:40:10.362: INFO: Trying to get logs from node kali-worker2 pod pod-87c56d21-4a41-4e7e-bd36-bb9c9ece8ecd container test-container: 
STEP: delete the pod
Jul 27 11:40:10.382: INFO: Waiting for pod pod-87c56d21-4a41-4e7e-bd36-bb9c9ece8ecd to disappear
Jul 27 11:40:10.410: INFO: Pod pod-87c56d21-4a41-4e7e-bd36-bb9c9ece8ecd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:40:10.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7832" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":239,"skipped":4135,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:40:10.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-a0cf88ae-bef6-42dc-9a56-75620522a6f8 in namespace container-probe-469
Jul 27 11:40:14.552: INFO: Started pod liveness-a0cf88ae-bef6-42dc-9a56-75620522a6f8 in namespace container-probe-469
STEP: checking the pod's current state and verifying that restartCount is present
Jul 27 11:40:14.555: INFO: Initial restart count of pod liveness-a0cf88ae-bef6-42dc-9a56-75620522a6f8 is 0
Jul 27 11:40:36.611: INFO: Restart count of pod container-probe-469/liveness-a0cf88ae-bef6-42dc-9a56-75620522a6f8 is now 1 (22.055426351s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:40:36.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-469" for this suite.

• [SLOW TEST:26.229 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4148,"failed":0}
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:40:36.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override all
Jul 27 11:40:36.802: INFO: Waiting up to 5m0s for pod "client-containers-19e7e16a-4d7e-4165-a774-8dce8f1a91b4" in namespace "containers-7736" to be "Succeeded or Failed"
Jul 27 11:40:36.873: INFO: Pod "client-containers-19e7e16a-4d7e-4165-a774-8dce8f1a91b4": Phase="Pending", Reason="", readiness=false. Elapsed: 70.784997ms
Jul 27 11:40:38.876: INFO: Pod "client-containers-19e7e16a-4d7e-4165-a774-8dce8f1a91b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073693124s
Jul 27 11:40:40.880: INFO: Pod "client-containers-19e7e16a-4d7e-4165-a774-8dce8f1a91b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077943771s
STEP: Saw pod success
Jul 27 11:40:40.880: INFO: Pod "client-containers-19e7e16a-4d7e-4165-a774-8dce8f1a91b4" satisfied condition "Succeeded or Failed"
Jul 27 11:40:40.883: INFO: Trying to get logs from node kali-worker2 pod client-containers-19e7e16a-4d7e-4165-a774-8dce8f1a91b4 container test-container: 
STEP: delete the pod
Jul 27 11:40:40.930: INFO: Waiting for pod client-containers-19e7e16a-4d7e-4165-a774-8dce8f1a91b4 to disappear
Jul 27 11:40:41.033: INFO: Pod client-containers-19e7e16a-4d7e-4165-a774-8dce8f1a91b4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:40:41.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7736" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4153,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:40:41.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Jul 27 11:40:41.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4998'
Jul 27 11:40:44.494: INFO: stderr: ""
Jul 27 11:40:44.494: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jul 27 11:40:44.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4998'
Jul 27 11:40:44.714: INFO: stderr: ""
Jul 27 11:40:44.714: INFO: stdout: "update-demo-nautilus-bnl8q update-demo-nautilus-msmxs "
Jul 27 11:40:44.714: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnl8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4998'
Jul 27 11:40:44.954: INFO: stderr: ""
Jul 27 11:40:44.954: INFO: stdout: ""
Jul 27 11:40:44.954: INFO: update-demo-nautilus-bnl8q is created but not running
Jul 27 11:40:49.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4998'
Jul 27 11:40:50.051: INFO: stderr: ""
Jul 27 11:40:50.051: INFO: stdout: "update-demo-nautilus-bnl8q update-demo-nautilus-msmxs "
Jul 27 11:40:50.051: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnl8q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4998'
Jul 27 11:40:50.145: INFO: stderr: ""
Jul 27 11:40:50.145: INFO: stdout: "true"
Jul 27 11:40:50.146: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bnl8q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4998'
Jul 27 11:40:50.248: INFO: stderr: ""
Jul 27 11:40:50.248: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 27 11:40:50.248: INFO: validating pod update-demo-nautilus-bnl8q
Jul 27 11:40:50.597: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 27 11:40:50.597: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 27 11:40:50.597: INFO: update-demo-nautilus-bnl8q is verified up and running
Jul 27 11:40:50.597: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-msmxs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4998'
Jul 27 11:40:50.759: INFO: stderr: ""
Jul 27 11:40:50.759: INFO: stdout: "true"
Jul 27 11:40:50.759: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-msmxs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4998'
Jul 27 11:40:50.996: INFO: stderr: ""
Jul 27 11:40:50.996: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jul 27 11:40:50.996: INFO: validating pod update-demo-nautilus-msmxs
Jul 27 11:40:51.048: INFO: got data: {
  "image": "nautilus.jpg"
}

Jul 27 11:40:51.048: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jul 27 11:40:51.048: INFO: update-demo-nautilus-msmxs is verified up and running
STEP: using delete to clean up resources
Jul 27 11:40:51.048: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4998'
Jul 27 11:40:51.245: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 27 11:40:51.245: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jul 27 11:40:51.245: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4998'
Jul 27 11:40:51.426: INFO: stderr: "No resources found in kubectl-4998 namespace.\n"
Jul 27 11:40:51.426: INFO: stdout: ""
Jul 27 11:40:51.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4998 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jul 27 11:40:51.596: INFO: stderr: ""
Jul 27 11:40:51.597: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:40:51.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4998" for this suite.

• [SLOW TEST:10.604 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":275,"completed":242,"skipped":4171,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:40:51.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-6841
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-6841
I0727 11:40:53.048060       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-6841, replica count: 2
I0727 11:40:56.098589       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0727 11:40:59.098877       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jul 27 11:40:59.098: INFO: Creating new exec pod
Jul 27 11:41:04.117: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-6841 execpodl85gd -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jul 27 11:41:04.345: INFO: stderr: "I0727 11:41:04.261244    2792 log.go:172] (0xc0006c4b00) (0xc0006b8320) Create stream\nI0727 11:41:04.261330    2792 log.go:172] (0xc0006c4b00) (0xc0006b8320) Stream added, broadcasting: 1\nI0727 11:41:04.266623    2792 log.go:172] (0xc0006c4b00) Reply frame received for 1\nI0727 11:41:04.266676    2792 log.go:172] (0xc0006c4b00) (0xc0006d0000) Create stream\nI0727 11:41:04.266696    2792 log.go:172] (0xc0006c4b00) (0xc0006d0000) Stream added, broadcasting: 3\nI0727 11:41:04.268478    2792 log.go:172] (0xc0006c4b00) Reply frame received for 3\nI0727 11:41:04.268546    2792 log.go:172] (0xc0006c4b00) (0xc0006d0140) Create stream\nI0727 11:41:04.268566    2792 log.go:172] (0xc0006c4b00) (0xc0006d0140) Stream added, broadcasting: 5\nI0727 11:41:04.269868    2792 log.go:172] (0xc0006c4b00) Reply frame received for 5\nI0727 11:41:04.338845    2792 log.go:172] (0xc0006c4b00) Data frame received for 5\nI0727 11:41:04.338879    2792 log.go:172] (0xc0006d0140) (5) Data frame handling\nI0727 11:41:04.338888    2792 log.go:172] (0xc0006d0140) (5) Data frame sent\nI0727 11:41:04.338894    2792 log.go:172] (0xc0006c4b00) Data frame received for 5\nI0727 11:41:04.338901    2792 log.go:172] (0xc0006d0140) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0727 11:41:04.338928    2792 log.go:172] (0xc0006c4b00) Data frame received for 3\nI0727 11:41:04.338937    2792 log.go:172] (0xc0006d0000) (3) Data frame handling\nI0727 11:41:04.340589    2792 log.go:172] (0xc0006c4b00) Data frame received for 1\nI0727 11:41:04.340602    2792 log.go:172] (0xc0006b8320) (1) Data frame handling\nI0727 11:41:04.340608    2792 log.go:172] (0xc0006b8320) (1) Data frame sent\nI0727 11:41:04.340617    2792 log.go:172] (0xc0006c4b00) (0xc0006b8320) Stream removed, broadcasting: 1\nI0727 11:41:04.340626    2792 log.go:172] (0xc0006c4b00) Go away received\nI0727 11:41:04.341123    2792 log.go:172] (0xc0006c4b00) (0xc0006b8320) Stream removed, broadcasting: 1\nI0727 11:41:04.341149    2792 log.go:172] (0xc0006c4b00) (0xc0006d0000) Stream removed, broadcasting: 3\nI0727 11:41:04.341157    2792 log.go:172] (0xc0006c4b00) (0xc0006d0140) Stream removed, broadcasting: 5\n"
Jul 27 11:41:04.345: INFO: stdout: ""
Jul 27 11:41:04.345: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-6841 execpodl85gd -- /bin/sh -x -c nc -zv -t -w 2 10.103.214.50 80'
Jul 27 11:41:04.536: INFO: stderr: "I0727 11:41:04.467548    2812 log.go:172] (0xc00003a0b0) (0xc0007be000) Create stream\nI0727 11:41:04.467627    2812 log.go:172] (0xc00003a0b0) (0xc0007be000) Stream added, broadcasting: 1\nI0727 11:41:04.469896    2812 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0727 11:41:04.469939    2812 log.go:172] (0xc00003a0b0) (0xc000812000) Create stream\nI0727 11:41:04.469952    2812 log.go:172] (0xc00003a0b0) (0xc000812000) Stream added, broadcasting: 3\nI0727 11:41:04.470618    2812 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0727 11:41:04.470636    2812 log.go:172] (0xc00003a0b0) (0xc0008120a0) Create stream\nI0727 11:41:04.470643    2812 log.go:172] (0xc00003a0b0) (0xc0008120a0) Stream added, broadcasting: 5\nI0727 11:41:04.471308    2812 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0727 11:41:04.528927    2812 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0727 11:41:04.528968    2812 log.go:172] (0xc000812000) (3) Data frame handling\nI0727 11:41:04.529013    2812 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0727 11:41:04.529030    2812 log.go:172] (0xc0008120a0) (5) Data frame handling\nI0727 11:41:04.529048    2812 log.go:172] (0xc0008120a0) (5) Data frame sent\nI0727 11:41:04.529062    2812 log.go:172] (0xc00003a0b0) Data frame received for 5\n+ nc -zv -t -w 2 10.103.214.50 80\nConnection to 10.103.214.50 80 port [tcp/http] succeeded!\nI0727 11:41:04.529076    2812 log.go:172] (0xc0008120a0) (5) Data frame handling\nI0727 11:41:04.530955    2812 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0727 11:41:04.530978    2812 log.go:172] (0xc0007be000) (1) Data frame handling\nI0727 11:41:04.530994    2812 log.go:172] (0xc0007be000) (1) Data frame sent\nI0727 11:41:04.531011    2812 log.go:172] (0xc00003a0b0) (0xc0007be000) Stream removed, broadcasting: 1\nI0727 11:41:04.531023    2812 log.go:172] (0xc00003a0b0) Go away received\nI0727 11:41:04.531589    2812 log.go:172] (0xc00003a0b0) (0xc0007be000) Stream removed, broadcasting: 1\nI0727 11:41:04.531621    2812 log.go:172] (0xc00003a0b0) (0xc000812000) Stream removed, broadcasting: 3\nI0727 11:41:04.531641    2812 log.go:172] (0xc00003a0b0) (0xc0008120a0) Stream removed, broadcasting: 5\n"
Jul 27 11:41:04.536: INFO: stdout: ""
Jul 27 11:41:04.536: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:41:04.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6841" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.965 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":243,"skipped":4234,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:41:04.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:41:05.196: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:41:07.206: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446865, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446865, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446865, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446865, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:41:10.275: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:41:10.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8233" for this suite.
STEP: Destroying namespace "webhook-8233-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.419 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":244,"skipped":4253,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:41:11.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8947
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-8947
Jul 27 11:41:11.422: INFO: Found 0 stateful pods, waiting for 1
Jul 27 11:41:21.426: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 27 11:41:21.449: INFO: Deleting all statefulset in ns statefulset-8947
Jul 27 11:41:21.468: INFO: Scaling statefulset ss to 0
Jul 27 11:41:41.586: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:41:41.589: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:41:41.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8947" for this suite.

• [SLOW TEST:30.584 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":245,"skipped":4258,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:41:41.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-5259/configmap-test-452eb3fa-96c9-41b2-bd03-e5e38aad74a4
STEP: Creating a pod to test consume configMaps
Jul 27 11:41:41.691: INFO: Waiting up to 5m0s for pod "pod-configmaps-dcd95f11-f3cf-4e18-b1b2-4522fde9d606" in namespace "configmap-5259" to be "Succeeded or Failed"
Jul 27 11:41:41.718: INFO: Pod "pod-configmaps-dcd95f11-f3cf-4e18-b1b2-4522fde9d606": Phase="Pending", Reason="", readiness=false. Elapsed: 26.974912ms
Jul 27 11:41:43.723: INFO: Pod "pod-configmaps-dcd95f11-f3cf-4e18-b1b2-4522fde9d606": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031348037s
Jul 27 11:41:45.729: INFO: Pod "pod-configmaps-dcd95f11-f3cf-4e18-b1b2-4522fde9d606": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037369096s
STEP: Saw pod success
Jul 27 11:41:45.729: INFO: Pod "pod-configmaps-dcd95f11-f3cf-4e18-b1b2-4522fde9d606" satisfied condition "Succeeded or Failed"
Jul 27 11:41:45.732: INFO: Trying to get logs from node kali-worker pod pod-configmaps-dcd95f11-f3cf-4e18-b1b2-4522fde9d606 container env-test: 
STEP: delete the pod
Jul 27 11:41:45.922: INFO: Waiting for pod pod-configmaps-dcd95f11-f3cf-4e18-b1b2-4522fde9d606 to disappear
Jul 27 11:41:45.953: INFO: Pod pod-configmaps-dcd95f11-f3cf-4e18-b1b2-4522fde9d606 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:41:45.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5259" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4291,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:41:45.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jul 27 11:41:47.121: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jul 27 11:41:49.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446907, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446907, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446907, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446907, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 27 11:41:51.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446907, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446907, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446907, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731446907, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jul 27 11:41:54.168: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:41:54.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-419" for this suite.
STEP: Destroying namespace "webhook-419-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.873 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":247,"skipped":4307,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:41:54.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:41:54.968: INFO: Waiting up to 5m0s for pod "downwardapi-volume-73cdb12f-1bad-4d0c-97d5-90daa49e64cf" in namespace "projected-8105" to be "Succeeded or Failed"
Jul 27 11:41:55.304: INFO: Pod "downwardapi-volume-73cdb12f-1bad-4d0c-97d5-90daa49e64cf": Phase="Pending", Reason="", readiness=false. Elapsed: 335.638729ms
Jul 27 11:41:57.308: INFO: Pod "downwardapi-volume-73cdb12f-1bad-4d0c-97d5-90daa49e64cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.339535392s
Jul 27 11:41:59.316: INFO: Pod "downwardapi-volume-73cdb12f-1bad-4d0c-97d5-90daa49e64cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.347415658s
STEP: Saw pod success
Jul 27 11:41:59.316: INFO: Pod "downwardapi-volume-73cdb12f-1bad-4d0c-97d5-90daa49e64cf" satisfied condition "Succeeded or Failed"
Jul 27 11:41:59.319: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-73cdb12f-1bad-4d0c-97d5-90daa49e64cf container client-container: 
STEP: delete the pod
Jul 27 11:41:59.359: INFO: Waiting for pod downwardapi-volume-73cdb12f-1bad-4d0c-97d5-90daa49e64cf to disappear
Jul 27 11:41:59.366: INFO: Pod downwardapi-volume-73cdb12f-1bad-4d0c-97d5-90daa49e64cf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:41:59.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8105" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":248,"skipped":4312,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:41:59.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: get the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:41:59.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9554" for this suite.
STEP: Destroying namespace "nspatchtest-4d2195cd-f0b8-42bc-89e6-d005215fc6b5-1602" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":249,"skipped":4321,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:41:59.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-076e508b-a895-48c7-97a1-cda56e59a51e
STEP: Creating a pod to test consume configMaps
Jul 27 11:41:59.777: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a139b104-b9bf-40c2-bf68-245f3f48b6de" in namespace "projected-6770" to be "Succeeded or Failed"
Jul 27 11:41:59.830: INFO: Pod "pod-projected-configmaps-a139b104-b9bf-40c2-bf68-245f3f48b6de": Phase="Pending", Reason="", readiness=false. Elapsed: 53.158894ms
Jul 27 11:42:01.834: INFO: Pod "pod-projected-configmaps-a139b104-b9bf-40c2-bf68-245f3f48b6de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057583321s
Jul 27 11:42:03.839: INFO: Pod "pod-projected-configmaps-a139b104-b9bf-40c2-bf68-245f3f48b6de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061778289s
STEP: Saw pod success
Jul 27 11:42:03.839: INFO: Pod "pod-projected-configmaps-a139b104-b9bf-40c2-bf68-245f3f48b6de" satisfied condition "Succeeded or Failed"
Jul 27 11:42:03.841: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-a139b104-b9bf-40c2-bf68-245f3f48b6de container projected-configmap-volume-test: 
STEP: delete the pod
Jul 27 11:42:03.916: INFO: Waiting for pod pod-projected-configmaps-a139b104-b9bf-40c2-bf68-245f3f48b6de to disappear
Jul 27 11:42:03.954: INFO: Pod pod-projected-configmaps-a139b104-b9bf-40c2-bf68-245f3f48b6de no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:42:03.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6770" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":250,"skipped":4332,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:42:03.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jul 27 11:42:06.305: INFO: Pod name wrapped-volume-race-d4273b7b-af85-4ba1-8390-3fa4ab81fc52: Found 0 pods out of 5
Jul 27 11:42:11.314: INFO: Pod name wrapped-volume-race-d4273b7b-af85-4ba1-8390-3fa4ab81fc52: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d4273b7b-af85-4ba1-8390-3fa4ab81fc52 in namespace emptydir-wrapper-7050, will wait for the garbage collector to delete the pods
Jul 27 11:42:25.420: INFO: Deleting ReplicationController wrapped-volume-race-d4273b7b-af85-4ba1-8390-3fa4ab81fc52 took: 8.159022ms
Jul 27 11:42:25.820: INFO: Terminating ReplicationController wrapped-volume-race-d4273b7b-af85-4ba1-8390-3fa4ab81fc52 pods took: 400.310702ms
STEP: Creating RC which spawns configmap-volume pods
Jul 27 11:42:43.595: INFO: Pod name wrapped-volume-race-0e3e5400-d784-431f-87ff-cf8ade59ad26: Found 0 pods out of 5
Jul 27 11:42:48.603: INFO: Pod name wrapped-volume-race-0e3e5400-d784-431f-87ff-cf8ade59ad26: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0e3e5400-d784-431f-87ff-cf8ade59ad26 in namespace emptydir-wrapper-7050, will wait for the garbage collector to delete the pods
Jul 27 11:43:04.787: INFO: Deleting ReplicationController wrapped-volume-race-0e3e5400-d784-431f-87ff-cf8ade59ad26 took: 22.491774ms
Jul 27 11:43:05.087: INFO: Terminating ReplicationController wrapped-volume-race-0e3e5400-d784-431f-87ff-cf8ade59ad26 pods took: 300.242885ms
STEP: Creating RC which spawns configmap-volume pods
Jul 27 11:43:13.534: INFO: Pod name wrapped-volume-race-b204869f-f605-4562-a54c-61962ca4fdc7: Found 0 pods out of 5
Jul 27 11:43:18.686: INFO: Pod name wrapped-volume-race-b204869f-f605-4562-a54c-61962ca4fdc7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-b204869f-f605-4562-a54c-61962ca4fdc7 in namespace emptydir-wrapper-7050, will wait for the garbage collector to delete the pods
Jul 27 11:43:34.936: INFO: Deleting ReplicationController wrapped-volume-race-b204869f-f605-4562-a54c-61962ca4fdc7 took: 7.990658ms
Jul 27 11:43:35.336: INFO: Terminating ReplicationController wrapped-volume-race-b204869f-f605-4562-a54c-61962ca4fdc7 pods took: 400.310826ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:43:45.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7050" for this suite.

• [SLOW TEST:101.654 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":251,"skipped":4337,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:43:45.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:43:45.793: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1229acd0-5d0b-41f2-8bd8-400d64ab16b2" in namespace "projected-1696" to be "Succeeded or Failed"
Jul 27 11:43:46.311: INFO: Pod "downwardapi-volume-1229acd0-5d0b-41f2-8bd8-400d64ab16b2": Phase="Pending", Reason="", readiness=false. Elapsed: 517.791187ms
Jul 27 11:43:48.316: INFO: Pod "downwardapi-volume-1229acd0-5d0b-41f2-8bd8-400d64ab16b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.522557714s
Jul 27 11:43:50.320: INFO: Pod "downwardapi-volume-1229acd0-5d0b-41f2-8bd8-400d64ab16b2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.526436115s
Jul 27 11:43:52.324: INFO: Pod "downwardapi-volume-1229acd0-5d0b-41f2-8bd8-400d64ab16b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.530669588s
STEP: Saw pod success
Jul 27 11:43:52.324: INFO: Pod "downwardapi-volume-1229acd0-5d0b-41f2-8bd8-400d64ab16b2" satisfied condition "Succeeded or Failed"
Jul 27 11:43:52.348: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-1229acd0-5d0b-41f2-8bd8-400d64ab16b2 container client-container: 
STEP: delete the pod
Jul 27 11:43:52.459: INFO: Waiting for pod downwardapi-volume-1229acd0-5d0b-41f2-8bd8-400d64ab16b2 to disappear
Jul 27 11:43:52.473: INFO: Pod downwardapi-volume-1229acd0-5d0b-41f2-8bd8-400d64ab16b2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:43:52.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1696" for this suite.

• [SLOW TEST:6.865 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4349,"failed":0}
SSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:43:52.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Jul 27 11:43:52.630: INFO: Waiting up to 5m0s for pod "downward-api-c6d11d69-c0ce-4140-aae7-a0291b6b3ee5" in namespace "downward-api-6709" to be "Succeeded or Failed"
Jul 27 11:43:52.652: INFO: Pod "downward-api-c6d11d69-c0ce-4140-aae7-a0291b6b3ee5": Phase="Pending", Reason="", readiness=false. Elapsed: 22.145213ms
Jul 27 11:43:54.655: INFO: Pod "downward-api-c6d11d69-c0ce-4140-aae7-a0291b6b3ee5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025262973s
Jul 27 11:43:56.659: INFO: Pod "downward-api-c6d11d69-c0ce-4140-aae7-a0291b6b3ee5": Phase="Running", Reason="", readiness=true. Elapsed: 4.029067653s
Jul 27 11:43:58.662: INFO: Pod "downward-api-c6d11d69-c0ce-4140-aae7-a0291b6b3ee5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03206751s
STEP: Saw pod success
Jul 27 11:43:58.662: INFO: Pod "downward-api-c6d11d69-c0ce-4140-aae7-a0291b6b3ee5" satisfied condition "Succeeded or Failed"
Jul 27 11:43:58.664: INFO: Trying to get logs from node kali-worker pod downward-api-c6d11d69-c0ce-4140-aae7-a0291b6b3ee5 container dapi-container: 
STEP: delete the pod
Jul 27 11:43:58.693: INFO: Waiting for pod downward-api-c6d11d69-c0ce-4140-aae7-a0291b6b3ee5 to disappear
Jul 27 11:43:58.724: INFO: Pod downward-api-c6d11d69-c0ce-4140-aae7-a0291b6b3ee5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:43:58.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6709" for this suite.

• [SLOW TEST:6.229 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":253,"skipped":4353,"failed":0}
SS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:43:58.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:43:58.788: INFO: Creating deployment "test-recreate-deployment"
Jul 27 11:43:58.803: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jul 27 11:43:58.883: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jul 27 11:44:00.891: INFO: Waiting deployment "test-recreate-deployment" to complete
Jul 27 11:44:00.893: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731447038, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731447038, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63731447038, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63731447038, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jul 27 11:44:02.898: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jul 27 11:44:02.905: INFO: Updating deployment test-recreate-deployment
Jul 27 11:44:02.906: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 27 11:44:03.660: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-6597 /apis/apps/v1/namespaces/deployment-6597/deployments/test-recreate-deployment 16b57b60-3ff3-4d02-8f7b-d9b141e686d2 4570825 2 2020-07-27 11:43:58 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-07-27 11:44:02 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-27 11:44:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 
112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003238918  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-27 11:44:03 +0000 UTC,LastTransitionTime:2020-07-27 11:44:03 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-07-27 11:44:03 +0000 UTC,LastTransitionTime:2020-07-27 11:43:58 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jul 27 11:44:03.679: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-6597 /apis/apps/v1/namespaces/deployment-6597/replicasets/test-recreate-deployment-d5667d9c7 b7b38077-a587-41dd-a66a-91576cdea44c 4570822 1 2020-07-27 11:44:03 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 16b57b60-3ff3-4d02-8f7b-d9b141e686d2 0xc00326ad10 0xc00326ad11}] []  [{kube-controller-manager Update apps/v1 2020-07-27 11:44:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 54 98 53 55 98 54 48 45 51 102 102 51 45 52 100 48 50 45 56 102 55 98 45 100 57 98 49 52 49 101 54 56 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 
34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00326ad88  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 27 11:44:03.679: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jul 27 11:44:03.679: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-6597 /apis/apps/v1/namespaces/deployment-6597/replicasets/test-recreate-deployment-74d98b5f7c b4fdd841-2190-4292-b714-e54504a9ce31 4570812 2 2020-07-27 11:43:58 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 16b57b60-3ff3-4d02-8f7b-d9b141e686d2 0xc00326ac17 0xc00326ac18}] []  [{kube-controller-manager Update apps/v1 2020-07-27 11:44:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 49 54 98 53 55 98 54 48 45 51 102 102 51 45 52 100 48 50 45 56 102 55 98 45 100 57 98 49 52 49 101 54 56 54 100 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 
115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00326aca8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 27 11:44:03.685: INFO: Pod "test-recreate-deployment-d5667d9c7-dg2gc" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-dg2gc test-recreate-deployment-d5667d9c7- deployment-6597 /api/v1/namespaces/deployment-6597/pods/test-recreate-deployment-d5667d9c7-dg2gc 236b897b-4394-4625-9fb1-c65b4a0a8f32 4570823 0 2020-07-27 11:44:03 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 b7b38077-a587-41dd-a66a-91576cdea44c 0xc00326b240 0xc00326b241}] []  [{kube-controller-manager Update v1 2020-07-27 11:44:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 55 98 51 56 48 55 55 45 97 53 56 55 45 52 49 100 100 45 97 54 54 97 45 57 49 53 55 54 99 100 101 97 52 52 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:44:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 
116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-75wjb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-75wjb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-75wjb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:
nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:44:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:44:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:44:03 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:44:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-27 11:44:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:44:03.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6597" for this suite.
•{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":254,"skipped":4355,"failed":0}
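
For reference, the Recreate strategy exercised by the deployment test above terminates every old pod before any replacement pod is created, which is why the pod dump above shows a freshly created httpd pod still in ContainerCreating. A minimal sketch of such a Deployment, assuming the httpd image seen in the dump; the object name and labels are illustrative, not taken from this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: recreate-example              # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate                    # old pods are terminated before new ones are created
  selector:
    matchLabels:
      app: httpd
  template:
    metadata:
      labels:
        app: httpd
    spec:
      containers:
      - name: httpd
        image: docker.io/library/httpd:2.4.38-alpine
EOF
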
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:44:03.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Jul 27 11:44:04.178: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0140862-af1d-4416-975a-630659163c57" in namespace "projected-2045" to be "Succeeded or Failed"
Jul 27 11:44:04.181: INFO: Pod "downwardapi-volume-e0140862-af1d-4416-975a-630659163c57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.958976ms
Jul 27 11:44:06.200: INFO: Pod "downwardapi-volume-e0140862-af1d-4416-975a-630659163c57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021060582s
Jul 27 11:44:08.366: INFO: Pod "downwardapi-volume-e0140862-af1d-4416-975a-630659163c57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.187595001s
STEP: Saw pod success
Jul 27 11:44:08.366: INFO: Pod "downwardapi-volume-e0140862-af1d-4416-975a-630659163c57" satisfied condition "Succeeded or Failed"
Jul 27 11:44:08.369: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-e0140862-af1d-4416-975a-630659163c57 container client-container: 
STEP: delete the pod
Jul 27 11:44:08.540: INFO: Waiting for pod downwardapi-volume-e0140862-af1d-4416-975a-630659163c57 to disappear
Jul 27 11:44:08.570: INFO: Pod downwardapi-volume-e0140862-af1d-4416-975a-630659163c57 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:44:08.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2045" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4378,"failed":0}
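
The passing test above relies on a documented downward-API default: when a container declares no memory limit, a resourceFieldRef for limits.memory reports the node's allocatable memory instead. A minimal by-hand sketch using a projected downwardAPI volume; only the container name client-container comes from the log above, while the pod name and image are assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31               # assumed; any image with a shell works
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory is set, so the projected file below falls back
    # to the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downwardapi-volume-example   # prints the defaulted memory limit (node allocatable)
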
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:44:08.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3498
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-3498
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3498
Jul 27 11:44:08.700: INFO: Found 0 stateful pods, waiting for 1
Jul 27 11:44:18.705: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
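
"Burst scaling" here presumably means the StatefulSet uses podManagementPolicy: Parallel, so pods are created and deleted without waiting for their ordinal predecessors to become Ready. A rough sketch of the kind of StatefulSet being exercised; only the names ss, test and webserver come from this log, while the labels, image and probe are assumptions:

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
spec:
  serviceName: test                   # the headless service created above
  replicas: 1
  podManagementPolicy: Parallel       # burst scaling: pods start and stop in parallel, not one ordinal at a time
  selector:
    matchLabels:
      app: ss                         # illustrative label
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine   # assumed; matches the apache2 paths used below
        readinessProbe:               # assumed; explains why hiding index.html flips Ready to false below
          httpGet:
            path: /index.html
            port: 80
EOF
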
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Jul 27 11:44:18.708: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:44:19.148: INFO: stderr: "I0727 11:44:19.032487    2827 log.go:172] (0xc0000e8d10) (0xc0006b5680) Create stream\nI0727 11:44:19.032547    2827 log.go:172] (0xc0000e8d10) (0xc0006b5680) Stream added, broadcasting: 1\nI0727 11:44:19.035363    2827 log.go:172] (0xc0000e8d10) Reply frame received for 1\nI0727 11:44:19.035418    2827 log.go:172] (0xc0000e8d10) (0xc0006255e0) Create stream\nI0727 11:44:19.035447    2827 log.go:172] (0xc0000e8d10) (0xc0006255e0) Stream added, broadcasting: 3\nI0727 11:44:19.036521    2827 log.go:172] (0xc0000e8d10) Reply frame received for 3\nI0727 11:44:19.036574    2827 log.go:172] (0xc0000e8d10) (0xc0006b5720) Create stream\nI0727 11:44:19.036599    2827 log.go:172] (0xc0000e8d10) (0xc0006b5720) Stream added, broadcasting: 5\nI0727 11:44:19.037664    2827 log.go:172] (0xc0000e8d10) Reply frame received for 5\nI0727 11:44:19.098995    2827 log.go:172] (0xc0000e8d10) Data frame received for 5\nI0727 11:44:19.099020    2827 log.go:172] (0xc0006b5720) (5) Data frame handling\nI0727 11:44:19.099039    2827 log.go:172] (0xc0006b5720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:44:19.141718    2827 log.go:172] (0xc0000e8d10) Data frame received for 3\nI0727 11:44:19.141766    2827 log.go:172] (0xc0006255e0) (3) Data frame handling\nI0727 11:44:19.141802    2827 log.go:172] (0xc0006255e0) (3) Data frame sent\nI0727 11:44:19.141823    2827 log.go:172] (0xc0000e8d10) Data frame received for 3\nI0727 11:44:19.141840    2827 log.go:172] (0xc0006255e0) (3) Data frame handling\nI0727 11:44:19.141876    2827 log.go:172] (0xc0000e8d10) Data frame received for 5\nI0727 11:44:19.141894    2827 log.go:172] (0xc0006b5720) (5) Data frame handling\nI0727 11:44:19.143979    2827 log.go:172] (0xc0000e8d10) Data frame received for 1\nI0727 11:44:19.144056    2827 log.go:172] (0xc0006b5680) (1) Data frame handling\nI0727 11:44:19.144082    2827 log.go:172] (0xc0006b5680) (1) Data frame sent\nI0727 11:44:19.144095    2827 log.go:172] (0xc0000e8d10) (0xc0006b5680) Stream removed, broadcasting: 1\nI0727 11:44:19.144115    2827 log.go:172] (0xc0000e8d10) Go away received\nI0727 11:44:19.144485    2827 log.go:172] (0xc0000e8d10) (0xc0006b5680) Stream removed, broadcasting: 1\nI0727 11:44:19.144511    2827 log.go:172] (0xc0000e8d10) (0xc0006255e0) Stream removed, broadcasting: 3\nI0727 11:44:19.144523    2827 log.go:172] (0xc0000e8d10) (0xc0006b5720) Stream removed, broadcasting: 5\n"
Jul 27 11:44:19.148: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:44:19.148: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 27 11:44:19.189: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jul 27 11:44:29.194: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
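
Ready flips to false above because moving index.html out of the document root breaks the (assumed) readiness probe sketched earlier, without touching the container process itself. Roughly the by-hand equivalent of this step:

kubectl exec -n statefulset-3498 ss-0 -- sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
kubectl get pod ss-0 -n statefulset-3498 -w   # READY drops from 1/1 to 0/1 once the probe starts failing
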
Jul 27 11:44:29.194: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:44:29.224: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:29.224: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:29.224: INFO: 
Jul 27 11:44:29.224: INFO: StatefulSet ss has not reached scale 3, at 1
Jul 27 11:44:30.227: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.980005165s
Jul 27 11:44:31.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977258714s
Jul 27 11:44:32.671: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.538779451s
Jul 27 11:44:33.676: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.533391287s
Jul 27 11:44:34.685: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.528728895s
Jul 27 11:44:35.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.51948299s
Jul 27 11:44:36.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.513945555s
Jul 27 11:44:37.701: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.509058002s
Jul 27 11:44:38.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 503.518953ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-3498
Jul 27 11:44:39.712: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:44:39.932: INFO: stderr: "I0727 11:44:39.837551    2850 log.go:172] (0xc000a5a000) (0xc0009f2000) Create stream\nI0727 11:44:39.837595    2850 log.go:172] (0xc000a5a000) (0xc0009f2000) Stream added, broadcasting: 1\nI0727 11:44:39.839994    2850 log.go:172] (0xc000a5a000) Reply frame received for 1\nI0727 11:44:39.840029    2850 log.go:172] (0xc000a5a000) (0xc0002e0000) Create stream\nI0727 11:44:39.840040    2850 log.go:172] (0xc000a5a000) (0xc0002e0000) Stream added, broadcasting: 3\nI0727 11:44:39.840712    2850 log.go:172] (0xc000a5a000) Reply frame received for 3\nI0727 11:44:39.840843    2850 log.go:172] (0xc000a5a000) (0xc000312000) Create stream\nI0727 11:44:39.840861    2850 log.go:172] (0xc000a5a000) (0xc000312000) Stream added, broadcasting: 5\nI0727 11:44:39.841722    2850 log.go:172] (0xc000a5a000) Reply frame received for 5\nI0727 11:44:39.925481    2850 log.go:172] (0xc000a5a000) Data frame received for 5\nI0727 11:44:39.925516    2850 log.go:172] (0xc000312000) (5) Data frame handling\nI0727 11:44:39.925531    2850 log.go:172] (0xc000312000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0727 11:44:39.925556    2850 log.go:172] (0xc000a5a000) Data frame received for 3\nI0727 11:44:39.925586    2850 log.go:172] (0xc0002e0000) (3) Data frame handling\nI0727 11:44:39.925613    2850 log.go:172] (0xc0002e0000) (3) Data frame sent\nI0727 11:44:39.925628    2850 log.go:172] (0xc000a5a000) Data frame received for 3\nI0727 11:44:39.925640    2850 log.go:172] (0xc0002e0000) (3) Data frame handling\nI0727 11:44:39.925678    2850 log.go:172] (0xc000a5a000) Data frame received for 5\nI0727 11:44:39.925702    2850 log.go:172] (0xc000312000) (5) Data frame handling\nI0727 11:44:39.927062    2850 log.go:172] (0xc000a5a000) Data frame received for 1\nI0727 11:44:39.927072    2850 log.go:172] (0xc0009f2000) (1) Data frame handling\nI0727 11:44:39.927078    2850 log.go:172] (0xc0009f2000) (1) Data frame sent\nI0727 11:44:39.927199    2850 log.go:172] (0xc000a5a000) (0xc0009f2000) Stream removed, broadcasting: 1\nI0727 11:44:39.927245    2850 log.go:172] (0xc000a5a000) Go away received\nI0727 11:44:39.927444    2850 log.go:172] (0xc000a5a000) (0xc0009f2000) Stream removed, broadcasting: 1\nI0727 11:44:39.927454    2850 log.go:172] (0xc000a5a000) (0xc0002e0000) Stream removed, broadcasting: 3\nI0727 11:44:39.927460    2850 log.go:172] (0xc000a5a000) (0xc000312000) Stream removed, broadcasting: 5\n"
Jul 27 11:44:39.933: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 27 11:44:39.933: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 27 11:44:39.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:44:40.132: INFO: stderr: "I0727 11:44:40.054390    2870 log.go:172] (0xc0009b0840) (0xc0007075e0) Create stream\nI0727 11:44:40.054462    2870 log.go:172] (0xc0009b0840) (0xc0007075e0) Stream added, broadcasting: 1\nI0727 11:44:40.057288    2870 log.go:172] (0xc0009b0840) Reply frame received for 1\nI0727 11:44:40.057326    2870 log.go:172] (0xc0009b0840) (0xc000942000) Create stream\nI0727 11:44:40.057338    2870 log.go:172] (0xc0009b0840) (0xc000942000) Stream added, broadcasting: 3\nI0727 11:44:40.058336    2870 log.go:172] (0xc0009b0840) Reply frame received for 3\nI0727 11:44:40.058361    2870 log.go:172] (0xc0009b0840) (0xc000707680) Create stream\nI0727 11:44:40.058369    2870 log.go:172] (0xc0009b0840) (0xc000707680) Stream added, broadcasting: 5\nI0727 11:44:40.059343    2870 log.go:172] (0xc0009b0840) Reply frame received for 5\nI0727 11:44:40.126369    2870 log.go:172] (0xc0009b0840) Data frame received for 3\nI0727 11:44:40.126401    2870 log.go:172] (0xc000942000) (3) Data frame handling\nI0727 11:44:40.126409    2870 log.go:172] (0xc000942000) (3) Data frame sent\nI0727 11:44:40.126414    2870 log.go:172] (0xc0009b0840) Data frame received for 3\nI0727 11:44:40.126418    2870 log.go:172] (0xc000942000) (3) Data frame handling\nI0727 11:44:40.126442    2870 log.go:172] (0xc0009b0840) Data frame received for 5\nI0727 11:44:40.126454    2870 log.go:172] (0xc000707680) (5) Data frame handling\nI0727 11:44:40.126461    2870 log.go:172] (0xc000707680) (5) Data frame sent\nI0727 11:44:40.126467    2870 log.go:172] (0xc0009b0840) Data frame received for 5\nI0727 11:44:40.126471    2870 log.go:172] (0xc000707680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0727 11:44:40.127559    2870 log.go:172] (0xc0009b0840) Data frame received for 1\nI0727 11:44:40.127581    2870 log.go:172] (0xc0007075e0) (1) Data frame handling\nI0727 11:44:40.127594    2870 log.go:172] (0xc0007075e0) (1) Data frame sent\nI0727 11:44:40.127610    2870 log.go:172] (0xc0009b0840) (0xc0007075e0) Stream removed, broadcasting: 1\nI0727 11:44:40.127631    2870 log.go:172] (0xc0009b0840) Go away received\nI0727 11:44:40.127932    2870 log.go:172] (0xc0009b0840) (0xc0007075e0) Stream removed, broadcasting: 1\nI0727 11:44:40.127956    2870 log.go:172] (0xc0009b0840) (0xc000942000) Stream removed, broadcasting: 3\nI0727 11:44:40.127964    2870 log.go:172] (0xc0009b0840) (0xc000707680) Stream removed, broadcasting: 5\n"
Jul 27 11:44:40.132: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 27 11:44:40.132: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 27 11:44:40.132: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:44:40.359: INFO: stderr: "I0727 11:44:40.280916    2893 log.go:172] (0xc0009ce8f0) (0xc0009403c0) Create stream\nI0727 11:44:40.280990    2893 log.go:172] (0xc0009ce8f0) (0xc0009403c0) Stream added, broadcasting: 1\nI0727 11:44:40.283406    2893 log.go:172] (0xc0009ce8f0) Reply frame received for 1\nI0727 11:44:40.283445    2893 log.go:172] (0xc0009ce8f0) (0xc000940460) Create stream\nI0727 11:44:40.283455    2893 log.go:172] (0xc0009ce8f0) (0xc000940460) Stream added, broadcasting: 3\nI0727 11:44:40.284377    2893 log.go:172] (0xc0009ce8f0) Reply frame received for 3\nI0727 11:44:40.284411    2893 log.go:172] (0xc0009ce8f0) (0xc000940500) Create stream\nI0727 11:44:40.284426    2893 log.go:172] (0xc0009ce8f0) (0xc000940500) Stream added, broadcasting: 5\nI0727 11:44:40.286338    2893 log.go:172] (0xc0009ce8f0) Reply frame received for 5\nI0727 11:44:40.351989    2893 log.go:172] (0xc0009ce8f0) Data frame received for 3\nI0727 11:44:40.352034    2893 log.go:172] (0xc000940460) (3) Data frame handling\nI0727 11:44:40.352052    2893 log.go:172] (0xc000940460) (3) Data frame sent\nI0727 11:44:40.352066    2893 log.go:172] (0xc0009ce8f0) Data frame received for 3\nI0727 11:44:40.352078    2893 log.go:172] (0xc000940460) (3) Data frame handling\nI0727 11:44:40.352121    2893 log.go:172] (0xc0009ce8f0) Data frame received for 5\nI0727 11:44:40.352165    2893 log.go:172] (0xc000940500) (5) Data frame handling\nI0727 11:44:40.352189    2893 log.go:172] (0xc000940500) (5) Data frame sent\nI0727 11:44:40.352204    2893 log.go:172] (0xc0009ce8f0) Data frame received for 5\nI0727 11:44:40.352216    2893 log.go:172] (0xc000940500) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0727 11:44:40.353486    2893 log.go:172] (0xc0009ce8f0) Data frame received for 1\nI0727 11:44:40.353517    2893 log.go:172] (0xc0009403c0) (1) Data frame handling\nI0727 11:44:40.353549    2893 log.go:172] (0xc0009403c0) (1) Data frame sent\nI0727 11:44:40.353578    2893 log.go:172] (0xc0009ce8f0) (0xc0009403c0) Stream removed, broadcasting: 1\nI0727 11:44:40.353604    2893 log.go:172] (0xc0009ce8f0) Go away received\nI0727 11:44:40.353924    2893 log.go:172] (0xc0009ce8f0) (0xc0009403c0) Stream removed, broadcasting: 1\nI0727 11:44:40.353940    2893 log.go:172] (0xc0009ce8f0) (0xc000940460) Stream removed, broadcasting: 3\nI0727 11:44:40.353948    2893 log.go:172] (0xc0009ce8f0) (0xc000940500) Stream removed, broadcasting: 5\n"
Jul 27 11:44:40.359: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jul 27 11:44:40.359: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jul 27 11:44:40.362: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:44:40.362: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jul 27 11:44:40.362: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
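
The scale-up above goes through the e2e framework's client; an equivalent by hand would be a plain kubectl scale, then watching the new pods appear:

kubectl scale statefulset ss --replicas=3 -n statefulset-3498
kubectl get pods -n statefulset-3498 -w   # with Parallel pod management, ss-1 and ss-2 are created without waiting on each other
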
STEP: Scale down will not halt with unhealthy stateful pod
Jul 27 11:44:40.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:44:40.569: INFO: stderr: "I0727 11:44:40.498864    2916 log.go:172] (0xc00053bc30) (0xc000701680) Create stream\nI0727 11:44:40.498920    2916 log.go:172] (0xc00053bc30) (0xc000701680) Stream added, broadcasting: 1\nI0727 11:44:40.503620    2916 log.go:172] (0xc00053bc30) Reply frame received for 1\nI0727 11:44:40.503698    2916 log.go:172] (0xc00053bc30) (0xc000978000) Create stream\nI0727 11:44:40.503731    2916 log.go:172] (0xc00053bc30) (0xc000978000) Stream added, broadcasting: 3\nI0727 11:44:40.505110    2916 log.go:172] (0xc00053bc30) Reply frame received for 3\nI0727 11:44:40.505148    2916 log.go:172] (0xc00053bc30) (0xc000514a00) Create stream\nI0727 11:44:40.505162    2916 log.go:172] (0xc00053bc30) (0xc000514a00) Stream added, broadcasting: 5\nI0727 11:44:40.506364    2916 log.go:172] (0xc00053bc30) Reply frame received for 5\nI0727 11:44:40.561219    2916 log.go:172] (0xc00053bc30) Data frame received for 3\nI0727 11:44:40.561246    2916 log.go:172] (0xc000978000) (3) Data frame handling\nI0727 11:44:40.561260    2916 log.go:172] (0xc000978000) (3) Data frame sent\nI0727 11:44:40.561266    2916 log.go:172] (0xc00053bc30) Data frame received for 3\nI0727 11:44:40.561270    2916 log.go:172] (0xc000978000) (3) Data frame handling\nI0727 11:44:40.561334    2916 log.go:172] (0xc00053bc30) Data frame received for 5\nI0727 11:44:40.561357    2916 log.go:172] (0xc000514a00) (5) Data frame handling\nI0727 11:44:40.561379    2916 log.go:172] (0xc000514a00) (5) Data frame sent\nI0727 11:44:40.561391    2916 log.go:172] (0xc00053bc30) Data frame received for 5\nI0727 11:44:40.561407    2916 log.go:172] (0xc000514a00) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:44:40.563226    2916 log.go:172] (0xc00053bc30) Data frame received for 1\nI0727 11:44:40.563245    2916 log.go:172] (0xc000701680) (1) Data frame handling\nI0727 11:44:40.563254    2916 log.go:172] (0xc000701680) (1) Data frame sent\nI0727 11:44:40.563266    2916 log.go:172] (0xc00053bc30) (0xc000701680) Stream removed, broadcasting: 1\nI0727 11:44:40.563335    2916 log.go:172] (0xc00053bc30) Go away received\nI0727 11:44:40.563598    2916 log.go:172] (0xc00053bc30) (0xc000701680) Stream removed, broadcasting: 1\nI0727 11:44:40.563610    2916 log.go:172] (0xc00053bc30) (0xc000978000) Stream removed, broadcasting: 3\nI0727 11:44:40.563615    2916 log.go:172] (0xc00053bc30) (0xc000514a00) Stream removed, broadcasting: 5\n"
Jul 27 11:44:40.569: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:44:40.569: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 27 11:44:40.569: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:44:40.795: INFO: stderr: "I0727 11:44:40.708682    2939 log.go:172] (0xc00099c000) (0xc0009a40a0) Create stream\nI0727 11:44:40.708819    2939 log.go:172] (0xc00099c000) (0xc0009a40a0) Stream added, broadcasting: 1\nI0727 11:44:40.713538    2939 log.go:172] (0xc00099c000) Reply frame received for 1\nI0727 11:44:40.713611    2939 log.go:172] (0xc00099c000) (0xc0005c9720) Create stream\nI0727 11:44:40.713649    2939 log.go:172] (0xc00099c000) (0xc0005c9720) Stream added, broadcasting: 3\nI0727 11:44:40.714690    2939 log.go:172] (0xc00099c000) Reply frame received for 3\nI0727 11:44:40.714727    2939 log.go:172] (0xc00099c000) (0xc000446b40) Create stream\nI0727 11:44:40.714740    2939 log.go:172] (0xc00099c000) (0xc000446b40) Stream added, broadcasting: 5\nI0727 11:44:40.715805    2939 log.go:172] (0xc00099c000) Reply frame received for 5\nI0727 11:44:40.763022    2939 log.go:172] (0xc00099c000) Data frame received for 5\nI0727 11:44:40.763051    2939 log.go:172] (0xc000446b40) (5) Data frame handling\nI0727 11:44:40.763079    2939 log.go:172] (0xc000446b40) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:44:40.787956    2939 log.go:172] (0xc00099c000) Data frame received for 3\nI0727 11:44:40.787977    2939 log.go:172] (0xc0005c9720) (3) Data frame handling\nI0727 11:44:40.787984    2939 log.go:172] (0xc0005c9720) (3) Data frame sent\nI0727 11:44:40.787989    2939 log.go:172] (0xc00099c000) Data frame received for 3\nI0727 11:44:40.787993    2939 log.go:172] (0xc0005c9720) (3) Data frame handling\nI0727 11:44:40.788239    2939 log.go:172] (0xc00099c000) Data frame received for 5\nI0727 11:44:40.788270    2939 log.go:172] (0xc000446b40) (5) Data frame handling\nI0727 11:44:40.790042    2939 log.go:172] (0xc00099c000) Data frame received for 1\nI0727 11:44:40.790060    2939 log.go:172] (0xc0009a40a0) (1) Data frame handling\nI0727 11:44:40.790073    2939 log.go:172] (0xc0009a40a0) (1) Data frame sent\nI0727 11:44:40.790087    2939 log.go:172] (0xc00099c000) (0xc0009a40a0) Stream removed, broadcasting: 1\nI0727 11:44:40.790161    2939 log.go:172] (0xc00099c000) Go away received\nI0727 11:44:40.790368    2939 log.go:172] (0xc00099c000) (0xc0009a40a0) Stream removed, broadcasting: 1\nI0727 11:44:40.790418    2939 log.go:172] (0xc00099c000) (0xc0005c9720) Stream removed, broadcasting: 3\nI0727 11:44:40.790434    2939 log.go:172] (0xc00099c000) (0xc000446b40) Stream removed, broadcasting: 5\n"
Jul 27 11:44:40.795: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:44:40.795: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 27 11:44:40.795: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jul 27 11:44:41.030: INFO: stderr: "I0727 11:44:40.918030    2960 log.go:172] (0xc000ae4840) (0xc000adc500) Create stream\nI0727 11:44:40.918077    2960 log.go:172] (0xc000ae4840) (0xc000adc500) Stream added, broadcasting: 1\nI0727 11:44:40.920603    2960 log.go:172] (0xc000ae4840) Reply frame received for 1\nI0727 11:44:40.920667    2960 log.go:172] (0xc000ae4840) (0xc000abe1e0) Create stream\nI0727 11:44:40.920681    2960 log.go:172] (0xc000ae4840) (0xc000abe1e0) Stream added, broadcasting: 3\nI0727 11:44:40.921987    2960 log.go:172] (0xc000ae4840) Reply frame received for 3\nI0727 11:44:40.922016    2960 log.go:172] (0xc000ae4840) (0xc000a7a000) Create stream\nI0727 11:44:40.922024    2960 log.go:172] (0xc000ae4840) (0xc000a7a000) Stream added, broadcasting: 5\nI0727 11:44:40.922979    2960 log.go:172] (0xc000ae4840) Reply frame received for 5\nI0727 11:44:40.984132    2960 log.go:172] (0xc000ae4840) Data frame received for 5\nI0727 11:44:40.984172    2960 log.go:172] (0xc000a7a000) (5) Data frame handling\nI0727 11:44:40.984208    2960 log.go:172] (0xc000a7a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0727 11:44:41.021567    2960 log.go:172] (0xc000ae4840) Data frame received for 3\nI0727 11:44:41.021685    2960 log.go:172] (0xc000abe1e0) (3) Data frame handling\nI0727 11:44:41.021790    2960 log.go:172] (0xc000abe1e0) (3) Data frame sent\nI0727 11:44:41.021906    2960 log.go:172] (0xc000ae4840) Data frame received for 3\nI0727 11:44:41.021946    2960 log.go:172] (0xc000abe1e0) (3) Data frame handling\nI0727 11:44:41.022263    2960 log.go:172] (0xc000ae4840) Data frame received for 5\nI0727 11:44:41.022314    2960 log.go:172] (0xc000a7a000) (5) Data frame handling\nI0727 11:44:41.024669    2960 log.go:172] (0xc000ae4840) Data frame received for 1\nI0727 11:44:41.024687    2960 log.go:172] (0xc000adc500) (1) Data frame handling\nI0727 11:44:41.024701    2960 log.go:172] (0xc000adc500) (1) Data frame sent\nI0727 11:44:41.025331    2960 log.go:172] (0xc000ae4840) (0xc000adc500) Stream removed, broadcasting: 1\nI0727 11:44:41.025715    2960 log.go:172] (0xc000ae4840) (0xc000adc500) Stream removed, broadcasting: 1\nI0727 11:44:41.025742    2960 log.go:172] (0xc000ae4840) (0xc000abe1e0) Stream removed, broadcasting: 3\nI0727 11:44:41.025762    2960 log.go:172] (0xc000ae4840) (0xc000a7a000) Stream removed, broadcasting: 5\n"
Jul 27 11:44:41.030: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jul 27 11:44:41.030: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jul 27 11:44:41.030: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:44:41.033: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Jul 27 11:44:51.040: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jul 27 11:44:51.040: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jul 27 11:44:51.040: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jul 27 11:44:51.051: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:51.051: INFO: ss-0  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:51.051: INFO: ss-1  kali-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:51.051: INFO: ss-2  kali-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:51.051: INFO: 
Jul 27 11:44:51.051: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 27 11:44:52.193: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:52.193: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:52.193: INFO: ss-1  kali-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:52.193: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:52.193: INFO: 
Jul 27 11:44:52.193: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 27 11:44:53.198: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:53.198: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:53.198: INFO: ss-1  kali-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:53.198: INFO: ss-2  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:53.198: INFO: 
Jul 27 11:44:53.198: INFO: StatefulSet ss has not reached scale 0, at 3
Jul 27 11:44:54.216: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:54.216: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:54.216: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:54.216: INFO: 
Jul 27 11:44:54.216: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 27 11:44:55.221: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:55.221: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:55.221: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:55.221: INFO: 
Jul 27 11:44:55.221: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 27 11:44:56.325: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:56.325: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:56.325: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:56.325: INFO: 
Jul 27 11:44:56.325: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 27 11:44:57.329: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:57.329: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:57.329: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:57.329: INFO: 
Jul 27 11:44:57.329: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 27 11:44:58.334: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:58.334: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:58.334: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:58.334: INFO: 
Jul 27 11:44:58.334: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 27 11:44:59.339: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:44:59.339: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:44:59.339: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:44:59.339: INFO: 
Jul 27 11:44:59.339: INFO: StatefulSet ss has not reached scale 0, at 2
Jul 27 11:45:00.379: INFO: POD   NODE          PHASE    GRACE  CONDITIONS
Jul 27 11:45:00.379: INFO: ss-0  kali-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:08 +0000 UTC  }]
Jul 27 11:45:00.379: INFO: ss-2  kali-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:41 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-07-27 11:44:29 +0000 UTC  }]
Jul 27 11:45:00.379: INFO: 
Jul 27 11:45:00.379: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3498
Jul 27 11:45:01.385: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:45:01.526: INFO: rc: 1
Jul 27 11:45:01.526: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
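
The failed retries here and below are expected: the scale-down to 0 first tears down the webserver container ("container not found") and then deletes the pod object itself, after which every exec against ss-0 returns NotFound. By hand, the same scale-down and check would look like:

kubectl scale statefulset ss --replicas=0 -n statefulset-3498
kubectl get pods -n statefulset-3498      # eventually reports "No resources found" once all pods are gone
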
Jul 27 11:45:11.526: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:45:11.642: INFO: rc: 1
Jul 27 11:45:11.642: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:45:21.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:45:21.757: INFO: rc: 1
Jul 27 11:45:21.757: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:45:31.757: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:45:31.857: INFO: rc: 1
Jul 27 11:45:31.857: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:45:41.857: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:45:41.949: INFO: rc: 1
Jul 27 11:45:41.950: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:45:51.950: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:45:52.041: INFO: rc: 1
Jul 27 11:45:52.041: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:46:02.041: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:46:02.135: INFO: rc: 1
Jul 27 11:46:02.135: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:46:12.135: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:46:12.230: INFO: rc: 1
Jul 27 11:46:12.230: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:46:22.231: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:46:22.333: INFO: rc: 1
Jul 27 11:46:22.334: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:46:32.334: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:46:32.435: INFO: rc: 1
Jul 27 11:46:32.435: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:46:42.436: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:46:42.548: INFO: rc: 1
Jul 27 11:46:42.548: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:46:52.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:46:52.640: INFO: rc: 1
Jul 27 11:46:52.640: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:47:02.641: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:47:02.749: INFO: rc: 1
Jul 27 11:47:02.749: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:47:12.749: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:47:12.853: INFO: rc: 1
Jul 27 11:47:12.853: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:47:22.854: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:47:22.949: INFO: rc: 1
Jul 27 11:47:22.949: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:47:32.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:47:33.055: INFO: rc: 1
Jul 27 11:47:33.055: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:47:43.055: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:47:43.152: INFO: rc: 1
Jul 27 11:47:43.152: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:47:53.152: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:47:53.256: INFO: rc: 1
Jul 27 11:47:53.256: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:48:03.256: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:48:03.391: INFO: rc: 1
Jul 27 11:48:03.391: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:48:13.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:48:13.493: INFO: rc: 1
Jul 27 11:48:13.493: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:48:23.494: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:48:23.588: INFO: rc: 1
Jul 27 11:48:23.589: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:48:33.589: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:48:33.693: INFO: rc: 1
Jul 27 11:48:33.693: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:48:43.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:48:43.786: INFO: rc: 1
Jul 27 11:48:43.786: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:48:53.786: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:48:53.884: INFO: rc: 1
Jul 27 11:48:53.884: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:49:03.884: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:49:03.979: INFO: rc: 1
Jul 27 11:49:03.979: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:49:13.980: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:49:14.082: INFO: rc: 1
Jul 27 11:49:14.082: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:49:24.082: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:49:24.176: INFO: rc: 1
Jul 27 11:49:24.176: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:49:34.176: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:49:34.283: INFO: rc: 1
Jul 27 11:49:34.283: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:49:44.283: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:49:44.389: INFO: rc: 1
Jul 27 11:49:44.389: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:49:54.389: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:49:54.495: INFO: rc: 1
Jul 27 11:49:54.495: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Jul 27 11:50:04.495: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3498 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jul 27 11:50:04.630: INFO: rc: 1
Jul 27 11:50:04.630: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Jul 27 11:50:04.630: INFO: Scaling statefulset ss to 0
Jul 27 11:50:04.660: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 27 11:50:04.663: INFO: Deleting all statefulset in ns statefulset-3498
Jul 27 11:50:04.666: INFO: Scaling statefulset ss to 0
Jul 27 11:50:04.675: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:50:04.677: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:50:04.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3498" for this suite.

• [SLOW TEST:356.119 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":256,"skipped":4399,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:50:04.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-288e1a87-7c8d-45e8-8ae8-8d21434922e3
STEP: Creating a pod to test consume secrets
Jul 27 11:50:04.788: INFO: Waiting up to 5m0s for pod "pod-secrets-d34a3367-6589-49a5-858c-6692bdd5d152" in namespace "secrets-128" to be "Succeeded or Failed"
Jul 27 11:50:04.793: INFO: Pod "pod-secrets-d34a3367-6589-49a5-858c-6692bdd5d152": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285907ms
Jul 27 11:50:06.823: INFO: Pod "pod-secrets-d34a3367-6589-49a5-858c-6692bdd5d152": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034814722s
Jul 27 11:50:08.847: INFO: Pod "pod-secrets-d34a3367-6589-49a5-858c-6692bdd5d152": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.058545239s
STEP: Saw pod success
Jul 27 11:50:08.847: INFO: Pod "pod-secrets-d34a3367-6589-49a5-858c-6692bdd5d152" satisfied condition "Succeeded or Failed"
Jul 27 11:50:08.849: INFO: Trying to get logs from node kali-worker2 pod pod-secrets-d34a3367-6589-49a5-858c-6692bdd5d152 container secret-volume-test: 
STEP: delete the pod
Jul 27 11:50:08.916: INFO: Waiting for pod pod-secrets-d34a3367-6589-49a5-858c-6692bdd5d152 to disappear
Jul 27 11:50:08.931: INFO: Pod pod-secrets-d34a3367-6589-49a5-858c-6692bdd5d152 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:50:08.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-128" for this suite.
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":257,"skipped":4439,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:50:08.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Jul 27 11:50:09.450: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jul 27 11:50:09.450: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4860'
Jul 27 11:50:09.739: INFO: stderr: ""
Jul 27 11:50:09.739: INFO: stdout: "service/agnhost-slave created\n"
Jul 27 11:50:09.739: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jul 27 11:50:09.739: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4860'
Jul 27 11:50:10.092: INFO: stderr: ""
Jul 27 11:50:10.094: INFO: stdout: "service/agnhost-master created\n"
Jul 27 11:50:10.094: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jul 27 11:50:10.094: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4860'
Jul 27 11:50:10.415: INFO: stderr: ""
Jul 27 11:50:10.415: INFO: stdout: "service/frontend created\n"
Jul 27 11:50:10.415: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jul 27 11:50:10.415: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4860'
Jul 27 11:50:10.685: INFO: stderr: ""
Jul 27 11:50:10.685: INFO: stdout: "deployment.apps/frontend created\n"
Jul 27 11:50:10.686: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 27 11:50:10.686: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4860'
Jul 27 11:50:10.964: INFO: stderr: ""
Jul 27 11:50:10.964: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jul 27 11:50:10.964: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jul 27 11:50:10.964: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4860'
Jul 27 11:50:11.226: INFO: stderr: ""
Jul 27 11:50:11.226: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jul 27 11:50:11.226: INFO: Waiting for all frontend pods to be Running.
Jul 27 11:50:21.276: INFO: Waiting for frontend to serve content.
Jul 27 11:50:21.289: INFO: Trying to add a new entry to the guestbook.
Jul 27 11:50:21.299: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Jul 27 11:50:21.307: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4860'
Jul 27 11:50:21.486: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 27 11:50:21.486: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jul 27 11:50:21.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4860'
Jul 27 11:50:21.664: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 27 11:50:21.664: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 27 11:50:21.664: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4860'
Jul 27 11:50:21.858: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 27 11:50:21.858: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 27 11:50:21.859: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4860'
Jul 27 11:50:21.986: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 27 11:50:21.986: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jul 27 11:50:21.986: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4860'
Jul 27 11:50:22.117: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 27 11:50:22.118: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jul 27 11:50:22.118: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4860'
Jul 27 11:50:22.490: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jul 27 11:50:22.490: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:50:22.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4860" for this suite.

• [SLOW TEST:13.789 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":258,"skipped":4456,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:50:22.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:50:23.318: INFO: (0) /api/v1/nodes/kali-worker:10250/proxy/logs/: 
alternatives.log
containers/
[node-log directory listing ("alternatives.log", "containers/") repeated for the remaining proxy iterations; the rest of this Proxy test's output and its result summary are truncated in the source]
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Jul 27 11:50:24.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:50:39.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1694" for this suite.

• [SLOW TEST:15.547 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":260,"skipped":4554,"failed":0}
SSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:50:39.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jul 27 11:50:43.594: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:50:43.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6320" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":261,"skipped":4566,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:50:43.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jul 27 11:50:43.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jul 27 11:50:54.369: INFO: >>> kubeConfig: /root/.kube/config
Jul 27 11:50:57.320: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:51:08.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8671" for this suite.

• [SLOW TEST:24.492 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":262,"skipped":4577,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:51:08.180: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Jul 27 11:51:08.788: INFO: created pod pod-service-account-defaultsa
Jul 27 11:51:08.788: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jul 27 11:51:08.799: INFO: created pod pod-service-account-mountsa
Jul 27 11:51:08.799: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jul 27 11:51:08.836: INFO: created pod pod-service-account-nomountsa
Jul 27 11:51:08.836: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jul 27 11:51:08.903: INFO: created pod pod-service-account-defaultsa-mountspec
Jul 27 11:51:08.903: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jul 27 11:51:08.927: INFO: created pod pod-service-account-mountsa-mountspec
Jul 27 11:51:08.927: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jul 27 11:51:08.961: INFO: created pod pod-service-account-nomountsa-mountspec
Jul 27 11:51:08.961: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jul 27 11:51:09.001: INFO: created pod pod-service-account-defaultsa-nomountspec
Jul 27 11:51:09.001: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jul 27 11:51:09.061: INFO: created pod pod-service-account-mountsa-nomountspec
Jul 27 11:51:09.061: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jul 27 11:51:09.078: INFO: created pod pod-service-account-nomountsa-nomountspec
Jul 27 11:51:09.078: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:51:09.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4785" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":263,"skipped":4608,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:51:09.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jul 27 11:51:09.407: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:09.451: INFO: Number of nodes with available pods: 0
Jul 27 11:51:09.451: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:10.477: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:10.479: INFO: Number of nodes with available pods: 0
Jul 27 11:51:10.479: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:11.457: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:11.461: INFO: Number of nodes with available pods: 0
Jul 27 11:51:11.462: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:12.456: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:12.460: INFO: Number of nodes with available pods: 0
Jul 27 11:51:12.460: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:13.501: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:13.681: INFO: Number of nodes with available pods: 0
Jul 27 11:51:13.681: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:14.574: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:14.578: INFO: Number of nodes with available pods: 0
Jul 27 11:51:14.578: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:15.495: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:15.764: INFO: Number of nodes with available pods: 0
Jul 27 11:51:15.764: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:16.938: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:17.190: INFO: Number of nodes with available pods: 0
Jul 27 11:51:17.190: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:17.672: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:17.675: INFO: Number of nodes with available pods: 0
Jul 27 11:51:17.675: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:18.657: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:18.981: INFO: Number of nodes with available pods: 0
Jul 27 11:51:18.981: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:19.467: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:20.238: INFO: Number of nodes with available pods: 0
Jul 27 11:51:20.238: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:20.687: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:20.726: INFO: Number of nodes with available pods: 0
Jul 27 11:51:20.726: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:51:21.693: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:21.790: INFO: Number of nodes with available pods: 2
Jul 27 11:51:21.790: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jul 27 11:51:22.341: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:22.399: INFO: Number of nodes with available pods: 1
Jul 27 11:51:22.399: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:51:23.483: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:23.494: INFO: Number of nodes with available pods: 1
Jul 27 11:51:23.494: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:51:24.420: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:24.573: INFO: Number of nodes with available pods: 1
Jul 27 11:51:24.573: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:51:25.404: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:25.407: INFO: Number of nodes with available pods: 1
Jul 27 11:51:25.407: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:51:26.404: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:26.406: INFO: Number of nodes with available pods: 1
Jul 27 11:51:26.406: INFO: Node kali-worker2 is running more than one daemon pod
Jul 27 11:51:27.429: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:51:27.433: INFO: Number of nodes with available pods: 2
Jul 27 11:51:27.433: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4329, will wait for the garbage collector to delete the pods
Jul 27 11:51:27.521: INFO: Deleting DaemonSet.extensions daemon-set took: 30.038978ms
Jul 27 11:51:27.822: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.278932ms
Jul 27 11:51:33.431: INFO: Number of nodes with available pods: 0
Jul 27 11:51:33.432: INFO: Number of running nodes: 0, number of available pods: 0
Jul 27 11:51:33.434: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4329/daemonsets","resourceVersion":"4572831"},"items":null}

Jul 27 11:51:33.437: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4329/pods","resourceVersion":"4572832"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:51:33.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4329" for this suite.

• [SLOW TEST:24.272 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":264,"skipped":4615,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:51:33.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:51:33.634: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"dff432bf-0782-48c6-b94e-1d923d8c25e9", Controller:(*bool)(0xc003638f02), BlockOwnerDeletion:(*bool)(0xc003638f03)}}
Jul 27 11:51:33.647: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"89a37b5a-354e-462d-b982-0e6febbe0f94", Controller:(*bool)(0xc0035f3b82), BlockOwnerDeletion:(*bool)(0xc0035f3b83)}}
Jul 27 11:51:33.683: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"63169087-a392-4217-9a8a-74aca6de3f66", Controller:(*bool)(0xc0036391c2), BlockOwnerDeletion:(*bool)(0xc0036391c3)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:51:38.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3289" for this suite.

• [SLOW TEST:5.346 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":265,"skipped":4626,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:51:38.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:51:38.915: INFO: Creating deployment "webserver-deployment"
Jul 27 11:51:38.920: INFO: Waiting for observed generation 1
Jul 27 11:51:40.990: INFO: Waiting for all required pods to come up
Jul 27 11:51:40.994: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jul 27 11:51:51.003: INFO: Waiting for deployment "webserver-deployment" to complete
Jul 27 11:51:51.008: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jul 27 11:51:51.016: INFO: Updating deployment webserver-deployment
Jul 27 11:51:51.016: INFO: Waiting for observed generation 2
Jul 27 11:51:53.106: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jul 27 11:51:53.109: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jul 27 11:51:53.112: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 27 11:51:53.118: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jul 27 11:51:53.118: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jul 27 11:51:53.120: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jul 27 11:51:53.124: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jul 27 11:51:53.124: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jul 27 11:51:53.131: INFO: Updating deployment webserver-deployment
Jul 27 11:51:53.131: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jul 27 11:51:53.428: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jul 27 11:51:53.901: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Jul 27 11:51:54.580: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-5628 /apis/apps/v1/namespaces/deployment-5628/deployments/webserver-deployment c44cfd02-c037-4673-be03-7364a80fea7c 4573153 3 2020-07-27 11:51:38 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 
105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a3ec98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-07-27 11:51:51 +0000 UTC,LastTransitionTime:2020-07-27 11:51:38 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-07-27 11:51:53 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jul 27 11:51:54.714: INFO: New ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4  deployment-5628 /apis/apps/v1/namespaces/deployment-5628/replicasets/webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 4573190 3 2020-07-27 11:51:51 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment c44cfd02-c037-4673-be03-7364a80fea7c 0xc003a3f2f7 0xc003a3f2f8}] []  [{kube-controller-manager Update apps/v1 2020-07-27 11:51:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 52 52 99 102 100 48 50 45 99 48 51 55 45 52 54 55 51 45 98 101 48 51 45 55 51 54 52 97 56 48 102 101 97 55 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 
125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a3f398  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jul 27 11:51:54.714: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jul 27 11:51:54.714: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797  deployment-5628 /apis/apps/v1/namespaces/deployment-5628/replicasets/webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 4573183 3 2020-07-27 11:51:38 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment c44cfd02-c037-4673-be03-7364a80fea7c 0xc003a3f437 0xc003a3f438}] []  [{kube-controller-manager Update apps/v1 2020-07-27 11:51:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 52 52 99 102 100 48 50 45 99 48 51 55 45 52 54 55 51 45 98 101 48 51 45 55 51 54 52 97 56 48 102 101 97 55 99 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 
105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a3f4e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
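Reading the two dumps together: the new ReplicaSet (webserver-deployment-6676bcd6d4, image webserver:404) is at 13 replicas with 0 ready, the old one (84855cf797, docker.io/library/httpd:2.4.38-alpine) is at 20 replicas with 8 ready, and both carry deployment.kubernetes.io/desired-replicas:30 and max-replicas:33. A small Go sketch of the arithmetic those numbers imply (values copied from the dumps; the surge interpretation is an inference, not framework output):

package main

import "fmt"

func main() {
	// From the deployment.kubernetes.io annotations and ReplicaSet specs above.
	desired, maxReplicas := 30, 33
	oldReplicas, newReplicas := 20, 13

	fmt.Println("surge budget:", maxReplicas-desired)        // 3 pods above desired
	fmt.Println("total scheduled:", oldReplicas+newReplicas) // 33, the full budget
}

With the new pods unable to become ready, the rollout can sit at the max-replicas ceiling until they become available or the deployment is rolled back.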
Jul 27 11:51:54.853: INFO: Pod "webserver-deployment-6676bcd6d4-24fp9" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-24fp9 webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-24fp9 fbda7c02-27c7-4e93-ac8d-a0d123f7e6bd 4573114 0 2020-07-27 11:51:51 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332a487 0xc00332a488}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-27 11:51:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
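The pod above is Pending with Initialized and PodScheduled True but Ready and ContainersReady False (reason ContainersNotReady), which is what the framework summarizes as "is not available". A self-contained Go sketch of that readiness check (Condition and isReady are simplified stand-ins for the v1.PodCondition handling, not the real API types):

package main

import "fmt"

// Condition mirrors only the PodCondition fields visible in the dump above.
type Condition struct {
	Type   string
	Status string
	Reason string
}

// isReady reports whether the Ready condition is present and True.
func isReady(conds []Condition) bool {
	for _, c := range conds {
		if c.Type == "Ready" {
			return c.Status == "True"
		}
	}
	return false
}

func main() {
	conds := []Condition{
		{Type: "Initialized", Status: "True"},
		{Type: "Ready", Status: "False", Reason: "ContainersNotReady"},
		{Type: "ContainersReady", Status: "False", Reason: "ContainersNotReady"},
		{Type: "PodScheduled", Status: "True"},
	}
	fmt.Println(isReady(conds)) // false, matching "is not available" above
}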
Jul 27 11:51:54.853: INFO: Pod "webserver-deployment-6676bcd6d4-44qqc" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-44qqc webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-44qqc 329dd263-0c4f-4baa-8c9e-c3c82b0e81bc 4573122 0 2020-07-27 11:51:51 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332a6d7 0xc00332a6d8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-27 11:51:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
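Every pod spec in these dumps also carries the same pair of NoExecute tolerations for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable with tolerationSeconds 300, matching the defaults injected by the DefaultTolerationSeconds admission plugin. A minimal sketch reproducing them with simplified local types (not the real k8s.io/api structs):

package main

import "fmt"

// Toleration mirrors the toleration fields shown in the pod specs above.
type Toleration struct {
	Key               string
	Operator          string
	Effect            string
	TolerationSeconds int64
}

func main() {
	defaults := []Toleration{
		{Key: "node.kubernetes.io/not-ready", Operator: "Exists", Effect: "NoExecute", TolerationSeconds: 300},
		{Key: "node.kubernetes.io/unreachable", Operator: "Exists", Effect: "NoExecute", TolerationSeconds: 300},
	}
	for _, t := range defaults {
		fmt.Printf("%s (%s, %s) for %ds\n", t.Key, t.Operator, t.Effect, t.TolerationSeconds)
	}
}

These bound how long a pod would stay bound to a node that becomes not-ready or unreachable before eviction.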
Jul 27 11:51:54.854: INFO: Pod "webserver-deployment-6676bcd6d4-55flz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-55flz webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-55flz 533b4713-4437-41a7-8f2d-9af32a50f142 4573118 0 2020-07-27 11:51:51 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332a8c7 0xc00332a8c8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-27 11:51:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.854: INFO: Pod "webserver-deployment-6676bcd6d4-bctws" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bctws webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-bctws 969edfee-b799-411f-b4cf-3b6917742407 4573171 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332aae7 0xc00332aae8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
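Unlike the earlier pods, webserver-deployment-6676bcd6d4-bctws (created at 11:51:53) has only a PodScheduled condition and no HostIP, StartTime, or container statuses yet: the scheduler has placed it, but the kubelet has not reported back. A small sketch of distinguishing that state (podView and scheduledOnly are illustrative helpers, not framework code):

package main

import "fmt"

// podView captures just the status fields visible in the dump above.
type podView struct {
	Conditions []string // condition types present
	HostIP     string
	StartTime  string
}

// scheduledOnly reports whether the only status so far is the scheduler's
// PodScheduled condition, with nothing yet reported by the kubelet.
func scheduledOnly(p podView) bool {
	return len(p.Conditions) == 1 && p.Conditions[0] == "PodScheduled" &&
		p.HostIP == "" && p.StartTime == ""
}

func main() {
	bctws := podView{Conditions: []string{"PodScheduled"}}
	fmt.Println(scheduledOnly(bctws)) // true
}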
Jul 27 11:51:54.854: INFO: Pod "webserver-deployment-6676bcd6d4-h5sx8" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-h5sx8 webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-h5sx8 ca505fc3-f16c-4881-9b8c-f904626bb3bd 4573099 0 2020-07-27 11:51:51 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332acb7 0xc00332acb8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Runt
imeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-27 11:51:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
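The container specs in these pods set no resource requests or limits (the ResourceList maps are empty), which is why every dump reports QOSClass:BestEffort. A minimal sketch of that rule (isBestEffort is a simplified check, not the full Kubernetes QoS classification):

package main

import "fmt"

// resources mirrors the empty Limits/Requests maps in the container specs above.
type resources struct {
	Limits   map[string]string
	Requests map[string]string
}

// isBestEffort reports whether no container sets any requests or limits,
// the condition under which a pod is classified as BestEffort.
func isBestEffort(containers []resources) bool {
	for _, c := range containers {
		if len(c.Limits) > 0 || len(c.Requests) > 0 {
			return false
		}
	}
	return true
}

func main() {
	httpd := resources{Limits: map[string]string{}, Requests: map[string]string{}}
	fmt.Println(isBestEffort([]resources{httpd})) // true -> QOSClass BestEffort
}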
Jul 27 11:51:54.855: INFO: Pod "webserver-deployment-6676bcd6d4-hjdhz" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hjdhz webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-hjdhz d9afa203-0267-4de2-bc01-c30d894089ee 4573177 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332af07 0xc00332af08}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.855: INFO: Pod "webserver-deployment-6676bcd6d4-hn95v" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-hn95v webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-hn95v 5060cfc4-3f8a-4f49-8cb7-212502ba17d2 4573101 0 2020-07-27 11:51:51 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332b0a7 0xc00332b0a8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:51 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 
34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},Run
timeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-27 11:51:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
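The FieldsV1{Raw:*[123 34 102 ...]} runs inside each ObjectMeta above are the pods' managedFields entries, printed by the Go stringer as decimal byte values rather than as the JSON they encode (the opening bytes 123 34 102 58 109 101 116 97 100 97 116 97 spell {"f:metadata"...). Below is a minimal standard-library sketch for decoding such a dump offline; the sample string is only the first few bytes of the arrays above, and the function name is ours, not part of the test framework.

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeFieldsV1 turns the space-separated decimal byte dump that the Pod
// stringer prints for ManagedFields (FieldsV1{Raw:*[...]}) back into the
// JSON it encodes.
func decodeFieldsV1(raw string) (string, error) {
	var buf []byte
	for _, tok := range strings.Fields(raw) {
		n, err := strconv.Atoi(tok)
		if err != nil {
			return "", err
		}
		buf = append(buf, byte(n))
	}
	return string(buf), nil
}

func main() {
	// Only the opening bytes of the dumps above; full arrays decode the same way.
	sample := "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
	s, err := decodeFieldsV1(sample)
	if err != nil {
		panic(err)
	}
	fmt.Println(s) // {"f:metadata":{}}
}

The decoded payloads only record which fields each manager (kube-controller-manager, kubelet) owns; every leaf is an empty object, so they carry no pod state beyond field ownership.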
Jul 27 11:51:54.855: INFO: Pod "webserver-deployment-6676bcd6d4-kdxq5" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-kdxq5 webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-kdxq5 196ee01e-3719-4851-af80-089dbd3dc921 4573176 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332b2b7 0xc00332b2b8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.856: INFO: Pod "webserver-deployment-6676bcd6d4-qzdtk" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-qzdtk webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-qzdtk be24e477-d3b8-4bb8-8744-797986e33f2e 4573175 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332b517 0xc00332b518}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.856: INFO: Pod "webserver-deployment-6676bcd6d4-rhd4x" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rhd4x webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-rhd4x da48f283-0837-405c-96c0-1a331b394ae3 4573169 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332b687 0xc00332b688}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.856: INFO: Pod "webserver-deployment-6676bcd6d4-s8xgn" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-s8xgn webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-s8xgn e0fb9d34-3425-47f8-bd0b-5fc6ab69bcb4 4573174 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332b887 0xc00332b888}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.856: INFO: Pod "webserver-deployment-6676bcd6d4-svh9b" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-svh9b webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-svh9b 6f60aa0e-ab52-4055-a78b-a6c1d3ef3cb5 4573184 0 2020-07-27 11:51:54 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332baa7 0xc00332baa8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.857: INFO: Pod "webserver-deployment-6676bcd6d4-vfndf" is not available:
&Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-vfndf webserver-deployment-6676bcd6d4- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-6676bcd6d4-vfndf 152286d4-cefc-40e2-b826-4132b61cccfa 4573146 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 30331b31-ac21-42ff-a11a-03caf9e4f204 0xc00332bcc7 0xc00332bcc8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 51 48 51 51 49 98 51 49 45 97 99 50 49 45 52 50 102 102 45 97 49 49 97 45 48 51 99 97 102 57 101 52 102 50 48 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.857: INFO: Pod "webserver-deployment-84855cf797-46545" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-46545 webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-46545 7f22946b-d5b4-49fc-93b8-ba06dd16dfa0 4573167 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc00332be17 0xc00332be18}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.857: INFO: Pod "webserver-deployment-84855cf797-94pv9" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-94pv9 webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-94pv9 6a44d550-e4bf-4986-a71b-d15b8c2a275b 4573029 0 2020-07-27 11:51:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d8287 0xc0022d8288}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:47 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 54 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.160,StartTime:2020-07-27 11:51:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:51:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f0211094c07046aeaff49b26b5222cb87409f599564fe836ddb3c091c0b92465,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
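The "is available" / "is not available" verdicts above follow directly from the dumped PodStatus: the webserver:404 pods stay Pending with Ready=False (ContainersNotReady), while the httpd:2.4.38-alpine pods are Running with Ready=True. The sketch below approximates, rather than reproduces, the framework's availability helper; it assumes the k8s.io/api and k8s.io/apimachinery modules and a minReadySeconds of 0, as in this Deployment.

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isAvailable approximates the check behind the log lines above: the pod is
// Running, its Ready condition is True, and it has been Ready for at least
// minReadySeconds.
func isAvailable(pod *corev1.Pod, minReadySeconds int32, now time.Time) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return now.Sub(c.LastTransitionTime.Time) >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}

func main() {
	// Minimal pod mirroring webserver-deployment-84855cf797-94pv9 above.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{{
				Type:               corev1.PodReady,
				Status:             corev1.ConditionTrue,
				LastTransitionTime: metav1.NewTime(time.Now().Add(-time.Minute)),
			}},
		},
	}
	fmt.Println(isAvailable(pod, 0, time.Now())) // true
}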
Jul 27 11:51:54.857: INFO: Pod "webserver-deployment-84855cf797-bfwkf" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-bfwkf webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-bfwkf ad4e12ba-4f64-4c4d-9c98-8aaacf800f44 4572991 0 2020-07-27 11:51:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d85e7 0xc0022d85e8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:43 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Ho
stAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.3,StartTime:2020-07-27 11:51:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:51:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d7881c22515023b2352ccd44809a0d242369e74c6712146ec1ac008307bab449,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
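Note (sketch, not output of the test run): the FieldsV1{Raw:*[...]} blocks inside these pod dumps are the managedFields JSON printed as space-separated decimal byte values. A minimal, self-contained Go sketch for turning such a block back into readable JSON, assuming the input is just the numbers copied out of the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// decodeRaw converts a space-separated list of decimal byte values
// (the form the log uses for FieldsV1 Raw) back into the JSON text it encodes.
func decodeRaw(raw string) string {
	var b strings.Builder
	for _, tok := range strings.Fields(raw) {
		n, err := strconv.Atoi(tok)
		if err != nil || n < 0 || n > 255 {
			continue // ignore anything that is not a byte value
		}
		b.WriteByte(byte(n))
	}
	return b.String()
}

func main() {
	// First bytes of the kube-controller-manager managedFields entry in the dumps above.
	sample := "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123"
	fmt.Println(decodeRaw(sample)) // prints: {"f:metadata":{
}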
Jul 27 11:51:54.858: INFO: Pod "webserver-deployment-84855cf797-cbmlr" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-cbmlr webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-cbmlr 08a1e07e-33ac-4ba1-a655-d3ea1ced5b9d 4573025 0 2020-07-27 11:51:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d8c67 0xc0022d8c68}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Ho
stAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.4,StartTime:2020-07-27 11:51:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:51:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://0272c98573145750790c7be7877dcd1203cb202f2fcd766c7f496463c0a375f2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.858: INFO: Pod "webserver-deployment-84855cf797-d6f96" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-d6f96 webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-d6f96 e49d21e3-bdd8-4f1b-bd82-e9e77ffc7bf3 4573155 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d8f67 0xc0022d8f68}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.858: INFO: Pod "webserver-deployment-84855cf797-d792r" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-d792r webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-d792r 3d51dcfb-6957-4e18-bf65-cf5e25a66950 4573170 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d9117 0xc0022d9118}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.858: INFO: Pod "webserver-deployment-84855cf797-dvmlb" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-dvmlb webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-dvmlb 1a78b56c-d89e-4031-b0ee-46c65ec2f518 4573179 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d9287 0xc0022d9288}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.859: INFO: Pod "webserver-deployment-84855cf797-gx7s7" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-gx7s7 webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-gx7s7 6364ac5b-9bf8-481e-8470-c73e36852dca 4573058 0 2020-07-27 11:51:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d9467 0xc0022d9468}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:49 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 54 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.162,StartTime:2020-07-27 11:51:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:51:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c24705c5195d43a7d0e26d5b861546d70c67f63743498d575502828e06c36cb9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.162,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.859: INFO: Pod "webserver-deployment-84855cf797-lptcq" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-lptcq webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-lptcq e2f75c67-e4e3-4ff4-8caa-34ce13e58f33 4573181 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d96c7 0xc0022d96c8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.859: INFO: Pod "webserver-deployment-84855cf797-m7hrv" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-m7hrv webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-m7hrv 22e12db0-c878-497a-ad50-af30882cda00 4573187 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d98a7 0xc0022d98a8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:54 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-27 11:51:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
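Note (sketch, not the e2e framework's actual helper): the "is available" / "is not available" labels in these lines track whether a pod's Ready condition has been True for at least the deployment's minReadySeconds; pods still Pending or with ContainersNotReady, as in the dump above, are logged as not available. A self-contained Go approximation of that check, using stand-in types rather than the real k8s.io/api ones:

package main

import (
	"fmt"
	"time"
)

// condition mirrors only the PodCondition fields this check needs;
// it is a stand-in for the real k8s.io/api/core/v1 types.
type condition struct {
	Type               string
	Status             string
	LastTransitionTime time.Time
}

// isAvailable reports whether the Ready condition has been True for at
// least minReady, which is roughly what "available" means in the log above.
func isAvailable(conds []condition, minReady time.Duration, now time.Time) bool {
	for _, c := range conds {
		if c.Type == "Ready" && c.Status == "True" {
			return !now.Before(c.LastTransitionTime.Add(minReady))
		}
	}
	return false
}

func main() {
	readySince := time.Date(2020, 7, 27, 11, 51, 43, 0, time.UTC) // Ready transition from the cbmlr dump
	now := time.Date(2020, 7, 27, 11, 51, 54, 0, time.UTC)
	conds := []condition{{Type: "Ready", Status: "True", LastTransitionTime: readySince}}
	fmt.Println(isAvailable(conds, 0, now)) // true
}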
Jul 27 11:51:54.860: INFO: Pod "webserver-deployment-84855cf797-m84qf" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-m84qf webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-m84qf 5b891c23-a4fd-4dfe-b732-661e5134b4b9 4573180 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d9af7 0xc0022d9af8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.860: INFO: Pod "webserver-deployment-84855cf797-mrdm8" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mrdm8 webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-mrdm8 bb5805af-defb-40c0-a4fe-7f8bf5df5bcd 4573195 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d9d17 0xc0022d9d18}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:54 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGate
s:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-07-27 11:51:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
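The mrdm8 dump above is typical of the "not available" entries: PodScheduled is True, but Ready and ContainersReady are False with reason ContainersNotReady while the httpd container is still in the ContainerCreating waiting state. A small sketch (names and helper are mine, not from the suite) of how that state can be summarized from a v1.Pod:

// notreadysketch.go - illustrative only.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// whyNotReady mirrors what the dump shows: the Ready condition's reason
// plus any container-level waiting reasons (e.g. ContainerCreating).
func whyNotReady(pod *v1.Pod) string {
	msg := ""
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady && c.Status != v1.ConditionTrue {
			msg = fmt.Sprintf("Ready=%s (%s)", c.Status, c.Reason)
		}
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			msg += fmt.Sprintf("; container %s waiting: %s", cs.Name, cs.State.Waiting.Reason)
		}
	}
	return msg
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{
		Conditions: []v1.PodCondition{{Type: v1.PodReady, Status: v1.ConditionFalse, Reason: "ContainersNotReady"}},
		ContainerStatuses: []v1.ContainerStatus{{
			Name:  "httpd",
			State: v1.ContainerState{Waiting: &v1.ContainerStateWaiting{Reason: "ContainerCreating"}},
		}},
	}}
	fmt.Println(whyNotReady(pod)) // Ready=False (ContainersNotReady); container httpd waiting: ContainerCreating
}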
Jul 27 11:51:54.860: INFO: Pod "webserver-deployment-84855cf797-mvbc2" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-mvbc2 webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-mvbc2 d6e560ac-ff3e-43f7-adf2-2e4121b9474c 4573182 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc0022d9fa7 0xc0022d9fa8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.861: INFO: Pod "webserver-deployment-84855cf797-nx9jq" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-nx9jq webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-nx9jq d85962fe-cfa5-4c8e-bc56-fce7a8a76ec4 4573024 0 2020-07-27 11:51:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc003b9c157 0xc003b9c158}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:46 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:46 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.159,StartTime:2020-07-27 11:51:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:51:46 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://acef818c3d224021206c1d663866bac25ad2527af8a6dca5eef86ac68ef2eb9b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
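By contrast, nx9jq above is reported as "available": it is Running with Ready and ContainersReady True. A minimal sketch of the usual availability rule behind these lines, assuming the standard Kubernetes behavior (Ready condition True for at least minReadySeconds); the e2e framework uses its own deployment helpers for this, and the function below is only an illustration:

// availablesketch.go - illustrative only.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// isPodAvailable returns true if the pod's Ready condition has been True
// for at least minReadySeconds.
func isPodAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type != v1.PodReady || c.Status != v1.ConditionTrue {
			continue
		}
		if minReadySeconds == 0 {
			return true
		}
		readyFor := now.Time.Sub(c.LastTransitionTime.Time)
		return readyFor >= time.Duration(minReadySeconds)*time.Second
	}
	return false
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{Conditions: []v1.PodCondition{{
		Type:               v1.PodReady,
		Status:             v1.ConditionTrue,
		LastTransitionTime: metav1.NewTime(time.Now().Add(-10 * time.Second)),
	}}}}
	fmt.Println(isPodAvailable(pod, 0, metav1.Now())) // true
}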
Jul 27 11:51:54.861: INFO: Pod "webserver-deployment-84855cf797-shwn8" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-shwn8 webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-shwn8 20135c5d-293c-4d60-8814-6ef1734734a9 4573151 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc003b9c3f7 0xc003b9c3f8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.861: INFO: Pod "webserver-deployment-84855cf797-sskcg" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-sskcg webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-sskcg c7cf0473-442c-4c75-a53f-4194a4a53c79 4572999 0 2020-07-27 11:51:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc003b9c5c7 0xc003b9c5c8}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:44 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 53 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.158,StartTime:2020-07-27 11:51:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:51:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://8a98dc52edbf0ee8c3eb9fd759f67b0854540d223d0db26b238c77da8976c963,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.158,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.861: INFO: Pod "webserver-deployment-84855cf797-tcp27" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-tcp27 webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-tcp27 5fb41adc-e864-4901-bc52-17b85520e930 4573041 0 2020-07-27 11:51:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc003b9c817 0xc003b9c818}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},Ho
stAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:48 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.5,StartTime:2020-07-27 11:51:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:51:48 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://513fe8a5472d6ef816a8a54f570038a06c987de9590a2df31fa9cca13e1a6692,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.862: INFO: Pod "webserver-deployment-84855cf797-vl49j" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-vl49j webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-vl49j 0210a8d5-3a90-4334-8644-8d92860c91a3 4573193 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc003b9ca17 0xc003b9ca18}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:54 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 
92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGat
es:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-07-27 11:51:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.862: INFO: Pod "webserver-deployment-84855cf797-w76xl" is available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-w76xl webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-w76xl 0bb1ae96-9244-4172-bc41-c5fcec6c2822 4573063 0 2020-07-27 11:51:39 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc003b9cc97 0xc003b9cc98}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:39 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-07-27 11:51:49 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 
102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 49 54 49 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300
,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:39 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.161,StartTime:2020-07-27 11:51:39 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-07-27 11:51:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://336de9e17e62892fa39ee0ba768218b6763426093361246e22652613f9dd65b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.161,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jul 27 11:51:54.862: INFO: Pod "webserver-deployment-84855cf797-z59zv" is not available:
&Pod{ObjectMeta:{webserver-deployment-84855cf797-z59zv webserver-deployment-84855cf797- deployment-5628 /api/v1/namespaces/deployment-5628/pods/webserver-deployment-84855cf797-z59zv f4af42c2-bb38-4587-9670-aca04cee1718 4573178 0 2020-07-27 11:51:53 +0000 UTC   map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 697d4ede-3cf5-49e7-ab1b-43c26f41aae3 0xc003b9cf67 0xc003b9cf68}] []  [{kube-controller-manager Update v1 2020-07-27 11:51:53 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 54 57 55 100 52 101 100 101 45 51 99 102 53 45 52 57 101 55 45 97 98 49 98 45 52 51 99 50 54 102 52 49 97 97 101 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-9l67z,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-9l67z,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-9l67z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-07-27 11:51:54 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:51:54.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5628" for this suite.

• [SLOW TEST:16.576 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":266,"skipped":4648,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:51:55.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-projected-all-test-volume-7d421c22-6f38-4047-9482-ad0d87338e22
STEP: Creating secret with name secret-projected-all-test-volume-d04feef7-4d6a-4e5d-980e-8901ef87bf6a
STEP: Creating a pod to test all projections for the projected volume plugin
Jul 27 11:51:57.554: INFO: Waiting up to 5m0s for pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664" in namespace "projected-7224" to be "Succeeded or Failed"
Jul 27 11:51:57.704: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664": Phase="Pending", Reason="", readiness=false. Elapsed: 149.694809ms
Jul 27 11:51:59.791: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236030791s
Jul 27 11:52:02.274: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664": Phase="Pending", Reason="", readiness=false. Elapsed: 4.719576676s
Jul 27 11:52:04.735: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664": Phase="Pending", Reason="", readiness=false. Elapsed: 7.180823722s
Jul 27 11:52:07.047: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664": Phase="Pending", Reason="", readiness=false. Elapsed: 9.492047645s
Jul 27 11:52:09.151: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664": Phase="Pending", Reason="", readiness=false. Elapsed: 11.596230693s
Jul 27 11:52:11.174: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664": Phase="Pending", Reason="", readiness=false. Elapsed: 13.6198523s
Jul 27 11:52:13.226: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664": Phase="Running", Reason="", readiness=true. Elapsed: 15.671804646s
Jul 27 11:52:15.528: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.973720334s
STEP: Saw pod success
Jul 27 11:52:15.528: INFO: Pod "projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664" satisfied condition "Succeeded or Failed"
Jul 27 11:52:15.640: INFO: Trying to get logs from node kali-worker2 pod projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664 container projected-all-volume-test: 
STEP: delete the pod
Jul 27 11:52:15.958: INFO: Waiting for pod projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664 to disappear
Jul 27 11:52:16.010: INFO: Pod projected-volume-9247273b-50de-4ec6-ab34-5e8e1258c664 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:52:16.010: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7224" for this suite.

• [SLOW TEST:21.073 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4666,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:52:16.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Jul 27 11:52:24.950: INFO: Successfully updated pod "annotationupdate51f13706-8dec-41df-870f-f227762f9488"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:52:27.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7455" for this suite.

• [SLOW TEST:10.600 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4674,"failed":0}
S
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:52:27.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:52:27.295: INFO: Create a RollingUpdate DaemonSet
Jul 27 11:52:27.363: INFO: Check that daemon pods launch on every node of the cluster
Jul 27 11:52:27.373: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:52:27.406: INFO: Number of nodes with available pods: 0
Jul 27 11:52:27.406: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:52:28.411: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:52:28.415: INFO: Number of nodes with available pods: 0
Jul 27 11:52:28.415: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:52:29.412: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:52:29.416: INFO: Number of nodes with available pods: 0
Jul 27 11:52:29.416: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:52:30.410: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:52:30.477: INFO: Number of nodes with available pods: 0
Jul 27 11:52:30.477: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:52:31.448: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:52:31.452: INFO: Number of nodes with available pods: 1
Jul 27 11:52:31.452: INFO: Node kali-worker is running more than one daemon pod
Jul 27 11:52:32.410: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:52:32.413: INFO: Number of nodes with available pods: 2
Jul 27 11:52:32.413: INFO: Number of running nodes: 2, number of available pods: 2
Jul 27 11:52:32.413: INFO: Update the DaemonSet to trigger a rollout
Jul 27 11:52:32.420: INFO: Updating DaemonSet daemon-set
Jul 27 11:52:44.447: INFO: Roll back the DaemonSet before rollout is complete
Jul 27 11:52:44.526: INFO: Updating DaemonSet daemon-set
Jul 27 11:52:44.526: INFO: Make sure DaemonSet rollback is complete
Jul 27 11:52:44.529: INFO: Wrong image for pod: daemon-set-5gbgw. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 27 11:52:44.529: INFO: Pod daemon-set-5gbgw is not available
Jul 27 11:52:44.586: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:52:45.753: INFO: Wrong image for pod: daemon-set-5gbgw. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 27 11:52:45.753: INFO: Pod daemon-set-5gbgw is not available
Jul 27 11:52:45.758: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:52:46.590: INFO: Wrong image for pod: daemon-set-5gbgw. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jul 27 11:52:46.590: INFO: Pod daemon-set-5gbgw is not available
Jul 27 11:52:46.593: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Jul 27 11:52:47.591: INFO: Pod daemon-set-wrqkk is not available
Jul 27 11:52:47.596: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1774, will wait for the garbage collector to delete the pods
Jul 27 11:52:47.663: INFO: Deleting DaemonSet.extensions daemon-set took: 6.896191ms
Jul 27 11:52:47.963: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.276956ms
Jul 27 11:52:51.489: INFO: Number of nodes with available pods: 0
Jul 27 11:52:51.489: INFO: Number of running nodes: 0, number of available pods: 0
Jul 27 11:52:51.491: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1774/daemonsets","resourceVersion":"4573782"},"items":null}

Jul 27 11:52:51.493: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1774/pods","resourceVersion":"4573782"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:52:51.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1774" for this suite.

• [SLOW TEST:24.441 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":269,"skipped":4675,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:52:51.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-8e4e4d55-42c0-45d0-84ea-323ef909328b
STEP: Creating a pod to test consuming configMaps
Jul 27 11:52:51.610: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-58ae578c-0916-423d-b62c-4efdd0cc5271" in namespace "projected-8404" to be "Succeeded or Failed"
Jul 27 11:52:51.667: INFO: Pod "pod-projected-configmaps-58ae578c-0916-423d-b62c-4efdd0cc5271": Phase="Pending", Reason="", readiness=false. Elapsed: 56.421043ms
Jul 27 11:52:53.672: INFO: Pod "pod-projected-configmaps-58ae578c-0916-423d-b62c-4efdd0cc5271": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061554343s
Jul 27 11:52:55.678: INFO: Pod "pod-projected-configmaps-58ae578c-0916-423d-b62c-4efdd0cc5271": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.067471509s
STEP: Saw pod success
Jul 27 11:52:55.678: INFO: Pod "pod-projected-configmaps-58ae578c-0916-423d-b62c-4efdd0cc5271" satisfied condition "Succeeded or Failed"
Jul 27 11:52:55.681: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-58ae578c-0916-423d-b62c-4efdd0cc5271 container projected-configmap-volume-test: 
STEP: delete the pod
Jul 27 11:52:55.716: INFO: Waiting for pod pod-projected-configmaps-58ae578c-0916-423d-b62c-4efdd0cc5271 to disappear
Jul 27 11:52:55.764: INFO: Pod pod-projected-configmaps-58ae578c-0916-423d-b62c-4efdd0cc5271 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:52:55.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8404" for this suite.
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":270,"skipped":4692,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:52:55.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Jul 27 11:52:55.919: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:52:57.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5910" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":271,"skipped":4698,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:52:57.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-64b969bd-a5a6-47c9-a5c3-d1aeb81173e3 in namespace container-probe-8428
Jul 27 11:53:01.299: INFO: Started pod test-webserver-64b969bd-a5a6-47c9-a5c3-d1aeb81173e3 in namespace container-probe-8428
STEP: checking the pod's current state and verifying that restartCount is present
Jul 27 11:53:01.302: INFO: Initial restart count of pod test-webserver-64b969bd-a5a6-47c9-a5c3-d1aeb81173e3 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:57:02.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8428" for this suite.

• [SLOW TEST:245.289 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4702,"failed":0}
SS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:57:02.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-a8c275a6-3ff7-4f6a-ba39-1a352faba31a
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:57:02.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8434" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":273,"skipped":4704,"failed":0}
SSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:57:02.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-49d0172c-ae7f-4de0-a383-59d2d224555a
STEP: Creating secret with name s-test-opt-upd-8e60c253-b9a4-42e3-ab66-d811cb03601e
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-49d0172c-ae7f-4de0-a383-59d2d224555a
STEP: Updating secret s-test-opt-upd-8e60c253-b9a4-42e3-ab66-d811cb03601e
STEP: Creating secret with name s-test-opt-create-4f0bc18b-89b9-4c30-82cb-c45c8dbb9779
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:58:33.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-835" for this suite.

• [SLOW TEST:90.993 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4710,"failed":0}
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Jul 27 11:58:33.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2119
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-2119
STEP: Creating statefulset with conflicting port in namespace statefulset-2119
STEP: Waiting until pod test-pod starts running in namespace statefulset-2119
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-2119
Jul 27 11:58:40.168: INFO: Observed stateful pod in namespace: statefulset-2119, name: ss-0, uid: db506d1e-a73e-4302-8a77-438191127e76, status phase: Pending. Waiting for statefulset controller to delete.
Jul 27 11:58:40.341: INFO: Observed stateful pod in namespace: statefulset-2119, name: ss-0, uid: db506d1e-a73e-4302-8a77-438191127e76, status phase: Failed. Waiting for statefulset controller to delete.
Jul 27 11:58:40.784: INFO: Observed stateful pod in namespace: statefulset-2119, name: ss-0, uid: db506d1e-a73e-4302-8a77-438191127e76, status phase: Failed. Waiting for statefulset controller to delete.
Jul 27 11:58:41.035: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2119
STEP: Removing pod with conflicting port in namespace statefulset-2119
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2119 and is in the running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Jul 27 11:58:45.666: INFO: Deleting all statefulset in ns statefulset-2119
Jul 27 11:58:45.669: INFO: Scaling statefulset ss to 0
Jul 27 11:58:55.740: INFO: Waiting for statefulset status.replicas updated to 0
Jul 27 11:58:55.743: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Jul 27 11:58:55.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2119" for this suite.

• [SLOW TEST:21.868 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":275,"skipped":4715,"failed":0}
SSJul 27 11:58:55.770: INFO: Running AfterSuite actions on all nodes
Jul 27 11:58:55.770: INFO: Running AfterSuite actions on node 1
Jul 27 11:58:55.770: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 5238.365 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS