I0813 18:10:26.305187 7 test_context.go:423] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0813 18:10:26.480059 7 e2e.go:124] Starting e2e run "18d1537d-cb2b-4adb-9610-a9f1e74c6290" on Ginkgo node 1
{"msg":"Test Suite starting","total":275,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1597342225 - Will randomize all specs
Will run 275 of 4992 specs

Aug 13 18:10:26.547: INFO: >>> kubeConfig: /root/.kube/config
Aug 13 18:10:26.551: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 13 18:10:26.730: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 13 18:10:26.757: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 13 18:10:26.757: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 13 18:10:26.757: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 13 18:10:26.763: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 13 18:10:26.763: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 13 18:10:26.763: INFO: e2e test version: v1.18.5
Aug 13 18:10:26.764: INFO: kube-apiserver version: v1.18.4
Aug 13 18:10:26.764: INFO: >>> kubeConfig: /root/.kube/config
Aug 13 18:10:26.769: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:10:26.770: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
Aug 13 18:10:28.131: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-66b08ff9-c052-49e9-b96d-2084f1a1e608
STEP: Creating a pod to test consume configMaps
Aug 13 18:10:28.203: INFO: Waiting up to 5m0s for pod "pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091" in namespace "configmap-7401" to be "Succeeded or Failed"
Aug 13 18:10:28.207: INFO: Pod "pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091": Phase="Pending", Reason="", readiness=false. Elapsed: 4.205045ms
Aug 13 18:10:30.278: INFO: Pod "pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07533849s
Aug 13 18:10:32.283: INFO: Pod "pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080063854s
Aug 13 18:10:34.362: INFO: Pod "pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091": Phase="Running", Reason="", readiness=true. Elapsed: 6.159092748s
Aug 13 18:10:36.530: INFO: Pod "pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.326799857s
STEP: Saw pod success
Aug 13 18:10:36.530: INFO: Pod "pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091" satisfied condition "Succeeded or Failed"
Aug 13 18:10:36.532: INFO: Trying to get logs from node kali-worker pod pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091 container configmap-volume-test:
STEP: delete the pod
Aug 13 18:10:37.293: INFO: Waiting for pod pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091 to disappear
Aug 13 18:10:37.328: INFO: Pod pod-configmaps-0f1a585f-975f-48e9-a0fb-7cf79f93b091 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:10:37.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7401" for this suite.
• [SLOW TEST:10.719 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":1,"skipped":67,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:10:37.489: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:10:54.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5463" for this suite.
• [SLOW TEST:17.507 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":275,"completed":2,"skipped":76,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:10:54.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-7804
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 13 18:10:55.561: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 13 18:10:56.279: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 18:10:58.283: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 18:11:00.602: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 18:11:02.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:11:04.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:11:06.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:11:08.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:11:10.282: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:11:12.283: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:11:14.286: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:11:16.283: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 13 18:11:16.290: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 13 18:11:18.393: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 13 18:11:24.805: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.210 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7804 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 18:11:24.805: INFO: >>> kubeConfig: /root/.kube/config
I0813 18:11:24.833626 7 log.go:172] (0xc00131f290) (0xc002471040) Create stream
I0813 18:11:24.833663 7 log.go:172] (0xc00131f290) (0xc002471040) Stream added, broadcasting: 1
I0813 18:11:24.835277 7 log.go:172] (0xc00131f290) Reply frame received for 1
I0813 18:11:24.835310 7 log.go:172] (0xc00131f290) (0xc002038000) Create stream
I0813 18:11:24.835320 7 log.go:172] (0xc00131f290) (0xc002038000) Stream added, broadcasting: 3
I0813 18:11:24.836236 7 log.go:172] (0xc00131f290) Reply frame received for 3
I0813 18:11:24.836264 7 log.go:172] (0xc00131f290) (0xc0024710e0) Create stream
I0813 18:11:24.836279 7 log.go:172] (0xc00131f290) (0xc0024710e0) Stream added, broadcasting: 5
I0813 18:11:24.837178 7 log.go:172] (0xc00131f290) Reply frame received for 5
I0813 18:11:25.932320 7 log.go:172] (0xc00131f290) Data frame received for 5
I0813 18:11:25.932355 7 log.go:172] (0xc0024710e0) (5) Data frame handling
I0813 18:11:25.932388 7 log.go:172] (0xc00131f290) Data frame received for 3
I0813 18:11:25.932403 7 log.go:172] (0xc002038000) (3) Data frame handling
I0813 18:11:25.932488 7 log.go:172] (0xc002038000) (3) Data frame sent
I0813 18:11:25.932503 7 log.go:172] (0xc00131f290) Data frame received for 3
I0813 18:11:25.932511 7 log.go:172] (0xc002038000) (3) Data frame handling
I0813 18:11:25.934504 7 log.go:172] (0xc00131f290) Data frame received for 1
I0813 18:11:25.934518 7 log.go:172] (0xc002471040) (1) Data frame handling
I0813 18:11:25.934533 7 log.go:172] (0xc002471040) (1) Data frame sent
I0813 18:11:25.934552 7 log.go:172] (0xc00131f290) (0xc002471040) Stream removed, broadcasting: 1
I0813 18:11:25.934592 7 log.go:172] (0xc00131f290) Go away received
I0813 18:11:25.934827 7 log.go:172] (0xc00131f290) (0xc002471040) Stream removed, broadcasting: 1
I0813 18:11:25.934838 7 log.go:172] (0xc00131f290) (0xc002038000) Stream removed, broadcasting: 3
I0813 18:11:25.934844 7 log.go:172] (0xc00131f290) (0xc0024710e0) Stream removed, broadcasting: 5
Aug 13 18:11:25.934: INFO: Found all expected endpoints: [netserver-0]
Aug 13 18:11:26.022: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.22 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7804 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 18:11:26.022: INFO: >>> kubeConfig: /root/.kube/config
I0813 18:11:26.075080 7 log.go:172] (0xc00131f8c0) (0xc002471540) Create stream
I0813 18:11:26.075130 7 log.go:172] (0xc00131f8c0) (0xc002471540) Stream added, broadcasting: 1
I0813 18:11:26.077086 7 log.go:172] (0xc00131f8c0) Reply frame received for 1
I0813 18:11:26.077129 7 log.go:172] (0xc00131f8c0) (0xc0024021e0) Create stream
I0813 18:11:26.077146 7 log.go:172] (0xc00131f8c0) (0xc0024021e0) Stream added, broadcasting: 3
I0813 18:11:26.078069 7 log.go:172] (0xc00131f8c0) Reply frame received for 3
I0813 18:11:26.078105 7 log.go:172] (0xc00131f8c0) (0xc001faa000) Create stream
I0813 18:11:26.078118 7 log.go:172] (0xc00131f8c0) (0xc001faa000) Stream added, broadcasting: 5
I0813 18:11:26.078851 7 log.go:172] (0xc00131f8c0) Reply frame received for 5
I0813 18:11:27.147179 7 log.go:172] (0xc00131f8c0) Data frame received for 3
I0813 18:11:27.147204 7 log.go:172] (0xc0024021e0) (3) Data frame handling
I0813 18:11:27.147219 7 log.go:172] (0xc0024021e0) (3) Data frame sent
I0813 18:11:27.147891 7 log.go:172] (0xc00131f8c0) Data frame received for 5
I0813 18:11:27.147917 7 log.go:172] (0xc001faa000) (5) Data frame handling
I0813 18:11:27.147979 7 log.go:172] (0xc00131f8c0) Data frame received for 3
I0813 18:11:27.148026 7 log.go:172] (0xc0024021e0) (3) Data frame handling
I0813 18:11:27.150141 7 log.go:172] (0xc00131f8c0) Data frame received for 1
I0813 18:11:27.150174 7 log.go:172] (0xc002471540) (1) Data frame handling
I0813 18:11:27.150198 7 log.go:172] (0xc002471540) (1) Data frame sent
I0813 18:11:27.150216 7 log.go:172] (0xc00131f8c0) (0xc002471540) Stream removed, broadcasting: 1
I0813 18:11:27.150236 7 log.go:172] (0xc00131f8c0) Go away received
I0813 18:11:27.150371 7 log.go:172] (0xc00131f8c0) (0xc002471540) Stream removed, broadcasting: 1
I0813 18:11:27.150393 7 log.go:172] (0xc00131f8c0) (0xc0024021e0) Stream removed, broadcasting: 3
I0813 18:11:27.150407 7 log.go:172] (0xc00131f8c0) (0xc001faa000) Stream removed, broadcasting: 5
Aug 13 18:11:27.150: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:11:27.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7804" for this suite.
• [SLOW TEST:32.298 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":3,"skipped":82,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Lease lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:11:27.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:11:27.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-4093" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":275,"completed":4,"skipped":90,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:11:27.643: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:11:27.735: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
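The repeated "DaemonSet pods can't tolerate node kali-control-plane with taints …, skip checking this node" messages that follow come from a toleration check: a DaemonSet pod is only expected on nodes whose every NoSchedule taint is matched by one of the pod's tolerations. A simplified sketch of that predicate (exact key/effect matching only, not the scheduler's full toleration semantics):

```go
// Simplified toleration check: does a pod's toleration list cover
// every NoSchedule taint on a node? Not the scheduler's complete
// semantics (no operators, values, or TolerationSeconds).
package main

import "fmt"

type Taint struct{ Key, Effect string }
type Toleration struct{ Key, Effect string }

// tolerated reports whether every NoSchedule taint is matched by a
// toleration with the same key (an empty toleration effect matches
// any effect, as in Kubernetes).
func tolerated(taints []Taint, tols []Toleration) bool {
	for _, t := range taints {
		if t.Effect != "NoSchedule" {
			continue // only NoSchedule blocks scheduling outright
		}
		matched := false
		for _, tol := range tols {
			if tol.Key == t.Key && (tol.Effect == t.Effect || tol.Effect == "") {
				matched = true
				break
			}
		}
		if !matched {
			return false
		}
	}
	return true
}

func main() {
	controlPlane := []Taint{{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}}
	// No tolerations: the test skips this node, as in the log.
	fmt.Println(tolerated(controlPlane, nil))
	// A matching toleration would admit the pod.
	fmt.Println(tolerated(controlPlane, []Toleration{{Key: "node-role.kubernetes.io/master"}}))
}
```

This is why the suite-level "Tolerating taints" option at the top of the run matters: nodes whose taints the test pods cannot tolerate are excluded from the per-node expectations rather than failing them.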
Aug 13 18:11:27.783: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:27.803: INFO: Number of nodes with available pods: 0
Aug 13 18:11:27.803: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:11:28.808: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:28.812: INFO: Number of nodes with available pods: 0
Aug 13 18:11:28.812: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:11:30.628: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:31.412: INFO: Number of nodes with available pods: 0
Aug 13 18:11:31.412: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:11:32.185: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:32.189: INFO: Number of nodes with available pods: 0
Aug 13 18:11:32.189: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:11:33.747: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:34.029: INFO: Number of nodes with available pods: 1
Aug 13 18:11:34.029: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:11:34.936: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:35.193: INFO: Number of nodes with available pods: 1
Aug 13 18:11:35.193: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:11:36.035: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:36.172: INFO: Number of nodes with available pods: 2
Aug 13 18:11:36.172: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 13 18:11:37.903: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:37.903: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:38.697: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:39.868: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:39.868: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:40.347: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:40.867: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:40.867: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:41.257: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:41.943: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:41.943: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:41.943: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:42.657: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:42.952: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:42.952: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:42.952: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:43.027: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:43.740: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:43.740: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:43.740: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:43.752: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:44.819: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:44.819: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:44.819: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:44.823: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:45.703: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:45.703: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:45.703: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:45.706: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:46.703: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:46.703: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:46.703: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:46.707: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:47.704: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:47.704: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:47.704: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:47.708: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:48.703: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:48.703: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:48.703: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:48.709: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:49.701: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:49.701: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:49.701: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:49.705: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:50.701: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:50.701: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:50.701: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:50.705: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:51.729: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:51.729: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:51.729: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:51.734: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:52.702: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:52.702: INFO: Wrong image for pod: daemon-set-drs8h. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:52.702: INFO: Pod daemon-set-drs8h is not available
Aug 13 18:11:52.707: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:53.716: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:53.716: INFO: Pod daemon-set-n9f8t is not available
Aug 13 18:11:53.720: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:54.703: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:54.703: INFO: Pod daemon-set-n9f8t is not available
Aug 13 18:11:54.707: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:55.758: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:55.758: INFO: Pod daemon-set-n9f8t is not available
Aug 13 18:11:55.952: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:56.702: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:56.702: INFO: Pod daemon-set-n9f8t is not available
Aug 13 18:11:56.706: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:57.879: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:58.035: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:58.702: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:58.707: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:11:59.702: INFO: Wrong image for pod: daemon-set-6nksv. Expected: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12, got: docker.io/library/httpd:2.4.38-alpine.
Aug 13 18:11:59.702: INFO: Pod daemon-set-6nksv is not available
Aug 13 18:11:59.706: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:12:00.702: INFO: Pod daemon-set-st8r5 is not available
Aug 13 18:12:00.707: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
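The rollout check that follows polls until "Number of nodes with available pods" equals the number of schedulable nodes. A sketch of that readiness count, with toy node and pod names in place of the real API objects (the second worker name is illustrative; the log only names kali-worker directly):

```go
// Sketch of the DaemonSet availability check: the rollout is done
// when every schedulable node runs at least one available daemon pod.
package main

import "fmt"

type Pod struct {
	Node      string
	Available bool
}

// nodesWithAvailablePods returns how many of the given nodes run at
// least one available daemon pod.
func nodesWithAvailablePods(nodes []string, pods []Pod) int {
	avail := map[string]bool{}
	for _, p := range pods {
		if p.Available {
			avail[p.Node] = true
		}
	}
	n := 0
	for _, node := range nodes {
		if avail[node] {
			n++
		}
	}
	return n
}

func main() {
	// Control plane node excluded up front: its taint is not tolerated.
	nodes := []string{"kali-worker", "kali-worker2"}
	pods := []Pod{{"kali-worker", true}, {"kali-worker2", false}}
	fmt.Println(nodesWithAvailablePods(nodes, pods)) // 1 of 2: keep polling
	pods[1].Available = true
	fmt.Println(nodesWithAvailablePods(nodes, pods)) // 2 of 2: rollout complete
}
```

During the RollingUpdate above, the count dips (old pods terminate before replacements become available), which is why the log reports 1 available node for several seconds before settling back at 2.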
Aug 13 18:12:00.711: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:12:00.714: INFO: Number of nodes with available pods: 1
Aug 13 18:12:00.714: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:12:02.089: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:12:02.460: INFO: Number of nodes with available pods: 1
Aug 13 18:12:02.460: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:12:02.762: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:12:02.765: INFO: Number of nodes with available pods: 1
Aug 13 18:12:02.765: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:12:03.720: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:12:03.723: INFO: Number of nodes with available pods: 1
Aug 13 18:12:03.723: INFO: Node kali-worker is running more than one daemon pod
Aug 13 18:12:05.059: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 18:12:05.062: INFO: Number of nodes with available pods: 2
Aug 13 18:12:05.062: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7339, will wait for the garbage collector to delete the pods
Aug 13 18:12:05.522: INFO: Deleting DaemonSet.extensions daemon-set took: 121.560629ms
Aug 13 18:12:06.222: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.282785ms
Aug 13 18:12:13.333: INFO: Number of nodes with available pods: 0
Aug 13 18:12:13.333: INFO: Number of running nodes: 0, number of available pods: 0
Aug 13 18:12:13.335: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7339/daemonsets","resourceVersion":"9268949"},"items":null}
Aug 13 18:12:13.399: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7339/pods","resourceVersion":"9268950"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:12:13.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7339" for this suite.
• [SLOW TEST:45.775 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":275,"completed":5,"skipped":97,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:12:13.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-162ed540-4b76-4ef8-af9d-d968bfb1816e
STEP: Creating configMap with name cm-test-opt-upd-9c4ca059-50d7-48f6-9037-815cfb4078f1
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-162ed540-4b76-4ef8-af9d-d968bfb1816e
STEP: Updating configmap cm-test-opt-upd-9c4ca059-50d7-48f6-9037-815cfb4078f1
STEP: Creating configMap with name cm-test-opt-create-62c99af5-b3b5-4f20-ad0f-8e45f3ea506e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:13:51.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9663" for this suite.
• [SLOW TEST:98.522 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":6,"skipped":99,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:13:51.940: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 13 18:13:52.238: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 13 18:13:57.295: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:13:58.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2909" for this suite. 
• [SLOW TEST:6.927 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":275,"completed":7,"skipped":111,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:13:58.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 13 18:14:01.239: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8" in namespace "downward-api-5727" to be "Succeeded or Failed" Aug 13 18:14:01.309: INFO: Pod "downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 70.081325ms Aug 13 18:14:03.316: INFO: Pod "downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.076387906s Aug 13 18:14:05.459: INFO: Pod "downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220068914s Aug 13 18:14:07.580: INFO: Pod "downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.340625722s Aug 13 18:14:09.915: INFO: Pod "downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8": Phase="Running", Reason="", readiness=true. Elapsed: 8.675921233s Aug 13 18:14:11.951: INFO: Pod "downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.712106327s STEP: Saw pod success Aug 13 18:14:11.952: INFO: Pod "downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8" satisfied condition "Succeeded or Failed" Aug 13 18:14:11.957: INFO: Trying to get logs from node kali-worker2 pod downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8 container client-container: STEP: delete the pod Aug 13 18:14:12.680: INFO: Waiting for pod downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8 to disappear Aug 13 18:14:13.094: INFO: Pod downwardapi-volume-b152e417-8518-4052-8170-add284e3f0f8 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:14:13.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5727" for this suite. 
• [SLOW TEST:14.234 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":275,"completed":8,"skipped":112,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:14:13.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:14:25.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-430" for this suite. 
• [SLOW TEST:12.000 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":275,"completed":9,"skipped":136,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:14:25.103: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 13 18:14:31.919: INFO: 10 pods remaining Aug 13 18:14:31.919: INFO: 10 pods has nil DeletionTimestamp Aug 13 18:14:31.919: INFO: Aug 13 18:14:33.562: INFO: 0 pods remaining Aug 13 18:14:33.562: INFO: 0 pods has nil DeletionTimestamp Aug 13 18:14:33.562: INFO: Aug 13 18:14:36.069: INFO: 0 pods remaining Aug 13 18:14:36.069: INFO: 0 pods has nil DeletionTimestamp Aug 13 18:14:36.069: INFO: STEP: Gathering metrics W0813 18:14:37.557169 7 metrics_grabber.go:84] Master node 
is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 13 18:14:37.557: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:14:37.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4897" for this suite. 
• [SLOW TEST:13.087 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":275,"completed":10,"skipped":155,"failed":0} SSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:14:38.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap configmap-6135/configmap-test-6ef0be35-057e-4794-aa9a-e42562147214 STEP: Creating a pod to test consume configMaps Aug 13 18:14:41.855: INFO: Waiting up to 5m0s for pod "pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a" in namespace "configmap-6135" to be "Succeeded or Failed" Aug 13 18:14:42.119: INFO: Pod "pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a": Phase="Pending", Reason="", readiness=false. Elapsed: 263.875294ms Aug 13 18:14:44.239: INFO: Pod "pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.383357714s Aug 13 18:14:46.571: INFO: Pod "pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715742384s Aug 13 18:14:48.636: INFO: Pod "pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.780401628s Aug 13 18:14:50.823: INFO: Pod "pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.967776748s STEP: Saw pod success Aug 13 18:14:50.823: INFO: Pod "pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a" satisfied condition "Succeeded or Failed" Aug 13 18:14:51.023: INFO: Trying to get logs from node kali-worker pod pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a container env-test: STEP: delete the pod Aug 13 18:14:51.465: INFO: Waiting for pod pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a to disappear Aug 13 18:14:51.581: INFO: Pod pod-configmaps-67e70c07-8c00-47a5-884b-862f04c8aa5a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:14:51.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6135" for this suite. 
• [SLOW TEST:13.673 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":11,"skipped":161,"failed":0} SS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:14:51.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test env composition Aug 13 18:14:52.316: INFO: Waiting up to 5m0s for pod "var-expansion-d4aef9c3-dca1-4081-a8cf-f70aff78adfc" in namespace "var-expansion-3654" to be "Succeeded or Failed" Aug 13 18:14:52.836: INFO: Pod "var-expansion-d4aef9c3-dca1-4081-a8cf-f70aff78adfc": Phase="Pending", Reason="", readiness=false. Elapsed: 519.979276ms Aug 13 18:14:55.263: INFO: Pod "var-expansion-d4aef9c3-dca1-4081-a8cf-f70aff78adfc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.947083648s Aug 13 18:14:57.352: INFO: Pod "var-expansion-d4aef9c3-dca1-4081-a8cf-f70aff78adfc": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.036489576s Aug 13 18:14:59.496: INFO: Pod "var-expansion-d4aef9c3-dca1-4081-a8cf-f70aff78adfc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.179525228s STEP: Saw pod success Aug 13 18:14:59.496: INFO: Pod "var-expansion-d4aef9c3-dca1-4081-a8cf-f70aff78adfc" satisfied condition "Succeeded or Failed" Aug 13 18:14:59.498: INFO: Trying to get logs from node kali-worker pod var-expansion-d4aef9c3-dca1-4081-a8cf-f70aff78adfc container dapi-container: STEP: delete the pod Aug 13 18:14:59.566: INFO: Waiting for pod var-expansion-d4aef9c3-dca1-4081-a8cf-f70aff78adfc to disappear Aug 13 18:14:59.572: INFO: Pod var-expansion-d4aef9c3-dca1-4081-a8cf-f70aff78adfc no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:14:59.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-3654" for this suite. • [SLOW TEST:7.812 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":275,"completed":12,"skipped":163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:14:59.676: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test substitution in container's args Aug 13 18:14:59.809: INFO: Waiting up to 5m0s for pod "var-expansion-bc234071-ccf0-48c0-9412-b2791f54192c" in namespace "var-expansion-1053" to be "Succeeded or Failed" Aug 13 18:15:00.090: INFO: Pod "var-expansion-bc234071-ccf0-48c0-9412-b2791f54192c": Phase="Pending", Reason="", readiness=false. Elapsed: 280.033024ms Aug 13 18:15:02.161: INFO: Pod "var-expansion-bc234071-ccf0-48c0-9412-b2791f54192c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.351054839s Aug 13 18:15:04.165: INFO: Pod "var-expansion-bc234071-ccf0-48c0-9412-b2791f54192c": Phase="Running", Reason="", readiness=true. Elapsed: 4.355620092s Aug 13 18:15:06.170: INFO: Pod "var-expansion-bc234071-ccf0-48c0-9412-b2791f54192c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.360112525s STEP: Saw pod success Aug 13 18:15:06.170: INFO: Pod "var-expansion-bc234071-ccf0-48c0-9412-b2791f54192c" satisfied condition "Succeeded or Failed" Aug 13 18:15:06.173: INFO: Trying to get logs from node kali-worker pod var-expansion-bc234071-ccf0-48c0-9412-b2791f54192c container dapi-container: STEP: delete the pod Aug 13 18:15:06.213: INFO: Waiting for pod var-expansion-bc234071-ccf0-48c0-9412-b2791f54192c to disappear Aug 13 18:15:06.229: INFO: Pod var-expansion-bc234071-ccf0-48c0-9412-b2791f54192c no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:15:06.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1053" for this suite. • [SLOW TEST:6.562 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":275,"completed":13,"skipped":189,"failed":0} S ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:15:06.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned 
in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod busybox-84e386a1-600d-4908-887f-369471d8be82 in namespace container-probe-9750 Aug 13 18:15:12.369: INFO: Started pod busybox-84e386a1-600d-4908-887f-369471d8be82 in namespace container-probe-9750 STEP: checking the pod's current state and verifying that restartCount is present Aug 13 18:15:12.371: INFO: Initial restart count of pod busybox-84e386a1-600d-4908-887f-369471d8be82 is 0 Aug 13 18:16:04.933: INFO: Restart count of pod container-probe-9750/busybox-84e386a1-600d-4908-887f-369471d8be82 is now 1 (52.562232417s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:16:05.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9750" for this suite. 
• [SLOW TEST:58.791 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":14,"skipped":190,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:16:05.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating secret secrets-8138/secret-test-22fa4441-bdf6-41ac-8dfc-c56c881fae50 STEP: Creating a pod to test consume secrets Aug 13 18:16:05.194: INFO: Waiting up to 5m0s for pod "pod-configmaps-078ad173-da9a-416f-9a5d-70ed55d15345" in namespace "secrets-8138" to be "Succeeded or Failed" Aug 13 18:16:05.201: INFO: Pod "pod-configmaps-078ad173-da9a-416f-9a5d-70ed55d15345": Phase="Pending", Reason="", readiness=false. Elapsed: 7.278041ms Aug 13 18:16:07.274: INFO: Pod "pod-configmaps-078ad173-da9a-416f-9a5d-70ed55d15345": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.079674696s Aug 13 18:16:09.278: INFO: Pod "pod-configmaps-078ad173-da9a-416f-9a5d-70ed55d15345": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.083816681s STEP: Saw pod success Aug 13 18:16:09.278: INFO: Pod "pod-configmaps-078ad173-da9a-416f-9a5d-70ed55d15345" satisfied condition "Succeeded or Failed" Aug 13 18:16:09.281: INFO: Trying to get logs from node kali-worker pod pod-configmaps-078ad173-da9a-416f-9a5d-70ed55d15345 container env-test: STEP: delete the pod Aug 13 18:16:09.473: INFO: Waiting for pod pod-configmaps-078ad173-da9a-416f-9a5d-70ed55d15345 to disappear Aug 13 18:16:09.575: INFO: Pod pod-configmaps-078ad173-da9a-416f-9a5d-70ed55d15345 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:16:09.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8138" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":275,"completed":15,"skipped":196,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:16:09.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name 
configmap-test-volume-ac6d3cb1-66fc-4a19-8c18-1b2a41356026 STEP: Creating a pod to test consume configMaps Aug 13 18:16:09.702: INFO: Waiting up to 5m0s for pod "pod-configmaps-cd9afa80-171f-472b-af27-9a7a7883a611" in namespace "configmap-5801" to be "Succeeded or Failed" Aug 13 18:16:09.713: INFO: Pod "pod-configmaps-cd9afa80-171f-472b-af27-9a7a7883a611": Phase="Pending", Reason="", readiness=false. Elapsed: 11.18994ms Aug 13 18:16:11.716: INFO: Pod "pod-configmaps-cd9afa80-171f-472b-af27-9a7a7883a611": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014212782s Aug 13 18:16:13.719: INFO: Pod "pod-configmaps-cd9afa80-171f-472b-af27-9a7a7883a611": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017629216s STEP: Saw pod success Aug 13 18:16:13.720: INFO: Pod "pod-configmaps-cd9afa80-171f-472b-af27-9a7a7883a611" satisfied condition "Succeeded or Failed" Aug 13 18:16:13.722: INFO: Trying to get logs from node kali-worker pod pod-configmaps-cd9afa80-171f-472b-af27-9a7a7883a611 container configmap-volume-test: STEP: delete the pod Aug 13 18:16:13.763: INFO: Waiting for pod pod-configmaps-cd9afa80-171f-472b-af27-9a7a7883a611 to disappear Aug 13 18:16:13.801: INFO: Pod pod-configmaps-cd9afa80-171f-472b-af27-9a7a7883a611 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:16:13.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5801" for this suite. 
•
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":16,"skipped":231,"failed":0}
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:16:13.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7457.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7457.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 13 18:16:25.979: INFO: DNS probes using dns-7457/dns-test-cf60ecc6-205f-4dcc-a3a5-d50a6d3d81a2 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:16:26.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7457" for this suite.
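The wheezy/jessie probe scripts above derive each pod's DNS A-record name from the output of `hostname -i` with an inline awk one-liner. A minimal standalone sketch of just that name construction (the `pod_a_record` helper and the example IP are illustrative, not part of the e2e framework, which inlines the awk directly):

```shell
#!/bin/sh
# Turn a pod IP into the "<a>-<b>-<c>-<d>.<namespace>.pod.cluster.local"
# A-record name that the DNS probe scripts query over UDP and TCP.
pod_a_record() {
    ip="$1"; ns="$2"
    echo "$ip" | awk -F. -v ns="$ns" '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

# Example: a pod with IP 10.244.1.5 in namespace dns-7457
pod_a_record "10.244.1.5" "dns-7457"   # prints: 10-244-1-5.dns-7457.pod.cluster.local
```

The doubled `$$` in the logged commands is template escaping; by the time the script runs in the probe pod it is a single `$`.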
• [SLOW TEST:12.322 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":275,"completed":17,"skipped":231,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:16:26.132: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:16:43.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8844" for this suite.
• [SLOW TEST:16.943 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":275,"completed":18,"skipped":288,"failed":0}
SSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo
  should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:16:43.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should create and stop a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 13 18:16:43.196: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7127'
Aug 13 18:16:47.060: INFO: stderr: ""
Aug 13 18:16:47.060: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 13 18:16:47.060: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7127'
Aug 13 18:16:47.216: INFO: stderr: ""
Aug 13 18:16:47.216: INFO: stdout: "update-demo-nautilus-24r9g update-demo-nautilus-nc2cm "
Aug 13 18:16:47.217: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24r9g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7127'
Aug 13 18:16:47.320: INFO: stderr: ""
Aug 13 18:16:47.320: INFO: stdout: ""
Aug 13 18:16:47.320: INFO: update-demo-nautilus-24r9g is created but not running
Aug 13 18:16:52.320: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7127'
Aug 13 18:16:52.771: INFO: stderr: ""
Aug 13 18:16:52.772: INFO: stdout: "update-demo-nautilus-24r9g update-demo-nautilus-nc2cm "
Aug 13 18:16:52.772: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24r9g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7127'
Aug 13 18:16:52.949: INFO: stderr: ""
Aug 13 18:16:52.949: INFO: stdout: ""
Aug 13 18:16:52.949: INFO: update-demo-nautilus-24r9g is created but not running
Aug 13 18:16:57.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7127'
Aug 13 18:16:58.066: INFO: stderr: ""
Aug 13 18:16:58.066: INFO: stdout: "update-demo-nautilus-24r9g update-demo-nautilus-nc2cm "
Aug 13 18:16:58.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24r9g -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7127'
Aug 13 18:16:58.158: INFO: stderr: ""
Aug 13 18:16:58.158: INFO: stdout: "true"
Aug 13 18:16:58.158: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-24r9g -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7127'
Aug 13 18:16:58.251: INFO: stderr: ""
Aug 13 18:16:58.251: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 13 18:16:58.251: INFO: validating pod update-demo-nautilus-24r9g
Aug 13 18:16:58.254: INFO: got data: {
  "image": "nautilus.jpg"
}
Aug 13 18:16:58.254: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 13 18:16:58.254: INFO: update-demo-nautilus-24r9g is verified up and running
Aug 13 18:16:58.254: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc2cm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7127'
Aug 13 18:16:58.365: INFO: stderr: ""
Aug 13 18:16:58.365: INFO: stdout: "true"
Aug 13 18:16:58.365: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nc2cm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7127'
Aug 13 18:16:58.456: INFO: stderr: ""
Aug 13 18:16:58.456: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 13 18:16:58.456: INFO: validating pod update-demo-nautilus-nc2cm
Aug 13 18:16:58.465: INFO: got data: {
  "image": "nautilus.jpg"
}
Aug 13 18:16:58.465: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 13 18:16:58.465: INFO: update-demo-nautilus-nc2cm is verified up and running
STEP: using delete to clean up resources
Aug 13 18:16:58.465: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7127'
Aug 13 18:16:58.571: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 18:16:58.571: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 13 18:16:58.571: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7127'
Aug 13 18:16:58.944: INFO: stderr: "No resources found in kubectl-7127 namespace.\n"
Aug 13 18:16:58.944: INFO: stdout: ""
Aug 13 18:16:58.944: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7127 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 13 18:16:59.321: INFO: stderr: ""
Aug 13 18:16:59.321: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:16:59.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7127" for this suite.
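The Update Demo test above polls `kubectl get pods -o template` every five seconds, logging "is created but not running" until the go-template prints `true`. The retry shape can be sketched as a standalone function (the `wait_until_true` and `POLL_SLEEP` names and the injected check command are illustrative; the real test polls kubectl with the containerStatuses template shown in the log):

```shell
#!/bin/sh
# Poll a check command until it prints "true" or the attempts run out,
# mirroring the e2e "is created but not running" retry loop.
POLL_SLEEP="${POLL_SLEEP:-5}"   # the e2e framework waits 5s between polls

wait_until_true() {
    attempts="$1"; shift
    i=0
    while [ "$i" -lt "$attempts" ]; do
        out="$("$@")"
        [ "$out" = "true" ] && return 0
        i=$((i + 1))
        sleep "$POLL_SLEEP"
    done
    return 1
}

# In the real test the injected check is roughly:
#   kubectl get pods <pod> -o template \
#     --template='{{if ...containerStatuses...running...}}true{{end}}' --namespace=<ns>
wait_until_true 3 echo "true" && echo "pod is running"
```

Because the template emits nothing at all while the container is not yet running, an empty stdout is the "keep waiting" signal, which is why the log shows `stdout: ""` on the first two polls.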
• [SLOW TEST:16.426 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should create and stop a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":275,"completed":19,"skipped":295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:16:59.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:16:59.983: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-a3c5c694-4ad5-417a-b53b-5d819e3803a0" in namespace "security-context-test-1664" to be "Succeeded or Failed"
Aug 13 18:16:59.986: INFO: Pod "alpine-nnp-false-a3c5c694-4ad5-417a-b53b-5d819e3803a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.853597ms
Aug 13 18:17:02.030: INFO: Pod "alpine-nnp-false-a3c5c694-4ad5-417a-b53b-5d819e3803a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046599974s
Aug 13 18:17:04.036: INFO: Pod "alpine-nnp-false-a3c5c694-4ad5-417a-b53b-5d819e3803a0": Phase="Running", Reason="", readiness=true. Elapsed: 4.052381652s
Aug 13 18:17:06.054: INFO: Pod "alpine-nnp-false-a3c5c694-4ad5-417a-b53b-5d819e3803a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070520123s
Aug 13 18:17:06.054: INFO: Pod "alpine-nnp-false-a3c5c694-4ad5-417a-b53b-5d819e3803a0" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:17:06.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-1664" for this suite.
• [SLOW TEST:6.770 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":20,"skipped":319,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:17:06.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 18:17:06.904: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bb60467f-ae2b-412b-8e2f-7c1f0ac13d2c" in namespace "downward-api-7338" to be "Succeeded or Failed"
Aug 13 18:17:06.919: INFO: Pod "downwardapi-volume-bb60467f-ae2b-412b-8e2f-7c1f0ac13d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.401887ms
Aug 13 18:17:08.922: INFO: Pod "downwardapi-volume-bb60467f-ae2b-412b-8e2f-7c1f0ac13d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017827407s
Aug 13 18:17:10.995: INFO: Pod "downwardapi-volume-bb60467f-ae2b-412b-8e2f-7c1f0ac13d2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090176405s
Aug 13 18:17:13.178: INFO: Pod "downwardapi-volume-bb60467f-ae2b-412b-8e2f-7c1f0ac13d2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.274028598s
STEP: Saw pod success
Aug 13 18:17:13.179: INFO: Pod "downwardapi-volume-bb60467f-ae2b-412b-8e2f-7c1f0ac13d2c" satisfied condition "Succeeded or Failed"
Aug 13 18:17:13.182: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-bb60467f-ae2b-412b-8e2f-7c1f0ac13d2c container client-container: 
STEP: delete the pod
Aug 13 18:17:13.680: INFO: Waiting for pod downwardapi-volume-bb60467f-ae2b-412b-8e2f-7c1f0ac13d2c to disappear
Aug 13 18:17:13.809: INFO: Pod downwardapi-volume-bb60467f-ae2b-412b-8e2f-7c1f0ac13d2c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:17:13.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7338" for this suite.
• [SLOW TEST:7.605 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":21,"skipped":361,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:17:13.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 13 18:17:22.126: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-1754 PodName:pod-sharedvolume-f1521775-9f22-4ee8-86f3-d91ca8f60dfc ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 18:17:22.126: INFO: >>> kubeConfig: /root/.kube/config
I0813 18:17:22.161686 7 log.go:172] (0xc002338000) (0xc002ac48c0) Create stream
I0813 18:17:22.161730 7 log.go:172] (0xc002338000) (0xc002ac48c0) Stream added, broadcasting: 1
I0813 18:17:22.163980 7 log.go:172] (0xc002338000) Reply frame received for 1
I0813 18:17:22.164029 7 log.go:172] (0xc002338000) (0xc002a263c0) Create stream
I0813 18:17:22.164048 7 log.go:172] (0xc002338000) (0xc002a263c0) Stream added, broadcasting: 3
I0813 18:17:22.165401 7 log.go:172] (0xc002338000) Reply frame received for 3
I0813 18:17:22.165459 7 log.go:172] (0xc002338000) (0xc001fab400) Create stream
I0813 18:17:22.165493 7 log.go:172] (0xc002338000) (0xc001fab400) Stream added, broadcasting: 5
I0813 18:17:22.166767 7 log.go:172] (0xc002338000) Reply frame received for 5
I0813 18:17:22.225941 7 log.go:172] (0xc002338000) Data frame received for 5
I0813 18:17:22.225985 7 log.go:172] (0xc001fab400) (5) Data frame handling
I0813 18:17:22.226015 7 log.go:172] (0xc002338000) Data frame received for 3
I0813 18:17:22.226031 7 log.go:172] (0xc002a263c0) (3) Data frame handling
I0813 18:17:22.226056 7 log.go:172] (0xc002a263c0) (3) Data frame sent
I0813 18:17:22.226076 7 log.go:172] (0xc002338000) Data frame received for 3
I0813 18:17:22.226085 7 log.go:172] (0xc002a263c0) (3) Data frame handling
I0813 18:17:22.228277 7 log.go:172] (0xc002338000) Data frame received for 1
I0813 18:17:22.228291 7 log.go:172] (0xc002ac48c0) (1) Data frame handling
I0813 18:17:22.228298 7 log.go:172] (0xc002ac48c0) (1) Data frame sent
I0813 18:17:22.228313 7 log.go:172] (0xc002338000) (0xc002ac48c0) Stream removed, broadcasting: 1
I0813 18:17:22.228396 7 log.go:172] (0xc002338000) (0xc002ac48c0) Stream removed, broadcasting: 1
I0813 18:17:22.228408 7 log.go:172] (0xc002338000) (0xc002a263c0) Stream removed, broadcasting: 3
I0813 18:17:22.228440 7 log.go:172] (0xc002338000) Go away received
I0813 18:17:22.228483 7 log.go:172] (0xc002338000) (0xc001fab400) Stream removed, broadcasting: 5
Aug 13 18:17:22.228: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:17:22.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1754" for this suite.
• [SLOW TEST:8.356 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":275,"completed":22,"skipped":364,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:17:22.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test hostPath mode
Aug 13 18:17:22.332: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1130" to be "Succeeded or Failed"
Aug 13 18:17:22.354: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 22.576298ms
Aug 13 18:17:24.357: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025560312s
Aug 13 18:17:26.361: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029446097s
Aug 13 18:17:28.365: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033393744s
STEP: Saw pod success
Aug 13 18:17:28.365: INFO: Pod "pod-host-path-test" satisfied condition "Succeeded or Failed"
Aug 13 18:17:28.368: INFO: Trying to get logs from node kali-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 13 18:17:28.548: INFO: Waiting for pod pod-host-path-test to disappear
Aug 13 18:17:28.677: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:17:28.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1130" for this suite.
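The HostPath test verifies that the volume is surfaced with the expected file mode. Outside the framework, the host-side equivalent is a `stat` check (the `mount_mode` helper and the temp-directory paths are illustrative; the real test reads the mode from inside the pod's test containers, and GNU coreutils `stat -c` is assumed):

```shell
#!/bin/sh
# Read a path's permission bits in octal and compare to an expected mode,
# a host-side sketch of the check the hostPath test runs inside the pod.
# On BSD/macOS the equivalent flag is `stat -f '%Lp'`.
mount_mode() {
    stat -c '%a' "$1"
}

dir="$(mktemp -d)"
chmod 777 "$dir"
mount_mode "$dir"   # prints: 777
rmdir "$dir"
```
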
• [SLOW TEST:6.452 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":23,"skipped":395,"failed":0}
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:17:28.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:171
[It] should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating server pod server in namespace prestop-897
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-897
STEP: Deleting pre-stop pod
Aug 13 18:17:43.915: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:17:43.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-897" for this suite.
• [SLOW TEST:15.326 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should call prestop when killing a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":275,"completed":24,"skipped":395,"failed":0}
[sig-auth] ServiceAccounts
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:17:44.014: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 13 18:17:49.123: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6288 pod-service-account-6bee6410-9173-4006-b74d-4145543452d3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 13 18:17:49.362: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6288 pod-service-account-6bee6410-9173-4006-b74d-4145543452d3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 13 18:17:49.569: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-6288 pod-service-account-6bee6410-9173-4006-b74d-4145543452d3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:17:49.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6288" for this suite.
• [SLOW TEST:5.928 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":275,"completed":25,"skipped":395,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:17:49.943: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-b04925dd-92eb-4d23-9f30-4b2a3211a153
STEP: Creating a pod to test consume configMaps
Aug 13 18:17:50.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac0f3ccd-83ca-4eed-b5f1-583025f1a23e" in namespace "configmap-2467" to be "Succeeded or Failed"
Aug 13 18:17:50.409: INFO: Pod "pod-configmaps-ac0f3ccd-83ca-4eed-b5f1-583025f1a23e": Phase="Pending", Reason="", readiness=false. Elapsed: 34.985073ms
Aug 13 18:17:52.415: INFO: Pod "pod-configmaps-ac0f3ccd-83ca-4eed-b5f1-583025f1a23e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040919183s
Aug 13 18:17:54.498: INFO: Pod "pod-configmaps-ac0f3ccd-83ca-4eed-b5f1-583025f1a23e": Phase="Running", Reason="", readiness=true. Elapsed: 4.123879898s
Aug 13 18:17:56.501: INFO: Pod "pod-configmaps-ac0f3ccd-83ca-4eed-b5f1-583025f1a23e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12659881s
STEP: Saw pod success
Aug 13 18:17:56.501: INFO: Pod "pod-configmaps-ac0f3ccd-83ca-4eed-b5f1-583025f1a23e" satisfied condition "Succeeded or Failed"
Aug 13 18:17:56.503: INFO: Trying to get logs from node kali-worker pod pod-configmaps-ac0f3ccd-83ca-4eed-b5f1-583025f1a23e container configmap-volume-test: 
STEP: delete the pod
Aug 13 18:17:56.552: INFO: Waiting for pod pod-configmaps-ac0f3ccd-83ca-4eed-b5f1-583025f1a23e to disappear
Aug 13 18:17:56.565: INFO: Pod pod-configmaps-ac0f3ccd-83ca-4eed-b5f1-583025f1a23e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:17:56.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2467" for this suite.
• [SLOW TEST:6.630 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":26,"skipped":443,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:17:56.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-932be7a5-4d12-46ad-b84c-ff4d1de78354
STEP: Creating a pod to test consume configMaps
Aug 13 18:17:56.949: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b598769-9b9b-44aa-bc66-0381518209a9" in namespace "configmap-6173" to be "Succeeded or Failed"
Aug 13 18:17:56.954: INFO: Pod "pod-configmaps-6b598769-9b9b-44aa-bc66-0381518209a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149517ms
Aug 13 18:17:59.085: INFO: Pod "pod-configmaps-6b598769-9b9b-44aa-bc66-0381518209a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135318662s
Aug 13 18:18:01.163: INFO: Pod "pod-configmaps-6b598769-9b9b-44aa-bc66-0381518209a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213506135s
Aug 13 18:18:03.167: INFO: Pod "pod-configmaps-6b598769-9b9b-44aa-bc66-0381518209a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.218092254s
STEP: Saw pod success
Aug 13 18:18:03.168: INFO: Pod "pod-configmaps-6b598769-9b9b-44aa-bc66-0381518209a9" satisfied condition "Succeeded or Failed"
Aug 13 18:18:03.170: INFO: Trying to get logs from node kali-worker pod pod-configmaps-6b598769-9b9b-44aa-bc66-0381518209a9 container configmap-volume-test:
STEP: delete the pod
Aug 13 18:18:03.187: INFO: Waiting for pod pod-configmaps-6b598769-9b9b-44aa-bc66-0381518209a9 to disappear
Aug 13 18:18:03.191: INFO: Pod pod-configmaps-6b598769-9b9b-44aa-bc66-0381518209a9 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:18:03.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6173" for this suite.
• [SLOW TEST:6.624 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":27,"skipped":457,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:18:03.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with configMap that has name projected-configmap-test-upd-0b7f266b-260e-4bd9-b15e-448db23938dc
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-0b7f266b-260e-4bd9-b15e-448db23938dc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:18:09.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6932" for this suite.
• [SLOW TEST:6.597 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":28,"skipped":460,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:18:09.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:18:10.355: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:18:16.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4651" for this suite.
• [SLOW TEST:7.055 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should support remote command execution over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":275,"completed":29,"skipped":468,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:18:16.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-3996
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating stateful set ss in namespace statefulset-3996
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3996
Aug 13 18:18:17.073: INFO: Found 0 stateful pods, waiting for 1
Aug 13 18:18:27.103: INFO: Waiting for pod
ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Aug 13 18:18:27.107: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3996 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 13 18:18:27.404: INFO: stderr: "I0813 18:18:27.251261 371 log.go:172] (0xc00003ad10) (0xc0007421e0) Create stream\nI0813 18:18:27.251346 371 log.go:172] (0xc00003ad10) (0xc0007421e0) Stream added, broadcasting: 1\nI0813 18:18:27.255272 371 log.go:172] (0xc00003ad10) Reply frame received for 1\nI0813 18:18:27.255312 371 log.go:172] (0xc00003ad10) (0xc00063f2c0) Create stream\nI0813 18:18:27.255324 371 log.go:172] (0xc00003ad10) (0xc00063f2c0) Stream added, broadcasting: 3\nI0813 18:18:27.256338 371 log.go:172] (0xc00003ad10) Reply frame received for 3\nI0813 18:18:27.256392 371 log.go:172] (0xc00003ad10) (0xc00063f4a0) Create stream\nI0813 18:18:27.256412 371 log.go:172] (0xc00003ad10) (0xc00063f4a0) Stream added, broadcasting: 5\nI0813 18:18:27.257378 371 log.go:172] (0xc00003ad10) Reply frame received for 5\nI0813 18:18:27.340594 371 log.go:172] (0xc00003ad10) Data frame received for 5\nI0813 18:18:27.340625 371 log.go:172] (0xc00063f4a0) (5) Data frame handling\nI0813 18:18:27.340644 371 log.go:172] (0xc00063f4a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:18:27.394112 371 log.go:172] (0xc00003ad10) Data frame received for 5\nI0813 18:18:27.394139 371 log.go:172] (0xc00063f4a0) (5) Data frame handling\nI0813 18:18:27.394206 371 log.go:172] (0xc00003ad10) Data frame received for 3\nI0813 18:18:27.394240 371 log.go:172] (0xc00063f2c0) (3) Data frame handling\nI0813 18:18:27.394279 371 log.go:172] (0xc00063f2c0) (3) Data frame sent\nI0813 18:18:27.394307 371 log.go:172] (0xc00003ad10) Data frame received for 3\nI0813 18:18:27.394327 371 
log.go:172] (0xc00063f2c0) (3) Data frame handling\nI0813 18:18:27.396199 371 log.go:172] (0xc00003ad10) Data frame received for 1\nI0813 18:18:27.396220 371 log.go:172] (0xc0007421e0) (1) Data frame handling\nI0813 18:18:27.396233 371 log.go:172] (0xc0007421e0) (1) Data frame sent\nI0813 18:18:27.396240 371 log.go:172] (0xc00003ad10) (0xc0007421e0) Stream removed, broadcasting: 1\nI0813 18:18:27.396457 371 log.go:172] (0xc00003ad10) Go away received\nI0813 18:18:27.396493 371 log.go:172] (0xc00003ad10) (0xc0007421e0) Stream removed, broadcasting: 1\nI0813 18:18:27.396567 371 log.go:172] Streams opened: 2, map[spdy.StreamId]*spdystream.Stream{0x3:(*spdystream.Stream)(0xc00063f2c0), 0x5:(*spdystream.Stream)(0xc00063f4a0)}\nI0813 18:18:27.396613 371 log.go:172] (0xc00003ad10) (0xc00063f2c0) Stream removed, broadcasting: 3\nI0813 18:18:27.396648 371 log.go:172] (0xc00003ad10) (0xc00063f4a0) Stream removed, broadcasting: 5\n" Aug 13 18:18:27.405: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 13 18:18:27.405: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 13 18:18:27.408: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 13 18:18:38.233: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 13 18:18:38.233: INFO: Waiting for statefulset status.replicas updated to 0 Aug 13 18:18:39.117: INFO: POD NODE PHASE GRACE CONDITIONS Aug 13 18:18:39.117: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:28 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC }] Aug 13 18:18:39.117: INFO: ss-1 Pending [] Aug 13 18:18:39.118: INFO: Aug 13 18:18:39.118: INFO: StatefulSet ss has not reached scale 3, at 2 Aug 13 18:18:40.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.83697231s Aug 13 18:18:41.738: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.254715126s Aug 13 18:18:42.742: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.216832288s Aug 13 18:18:43.846: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.212564459s Aug 13 18:18:45.008: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.108436218s Aug 13 18:18:46.011: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.946463324s Aug 13 18:18:47.015: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.943576758s Aug 13 18:18:48.027: INFO: Verifying statefulset ss doesn't scale past 3 for another 939.301195ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3996 Aug 13 18:18:49.493: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3996 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 13 18:18:49.748: INFO: stderr: "I0813 18:18:49.678465 393 log.go:172] (0xc00003a4d0) (0xc0004b8be0) Create stream\nI0813 18:18:49.678518 393 log.go:172] (0xc00003a4d0) (0xc0004b8be0) Stream added, broadcasting: 1\nI0813 18:18:49.681291 393 log.go:172] (0xc00003a4d0) Reply frame received for 1\nI0813 18:18:49.681332 393 log.go:172] (0xc00003a4d0) (0xc000888000) Create stream\nI0813 18:18:49.681342 393 log.go:172] (0xc00003a4d0) (0xc000888000) Stream added, broadcasting: 3\nI0813 18:18:49.682216 393 log.go:172] (0xc00003a4d0) Reply frame received for 3\nI0813 18:18:49.682265 393 log.go:172] (0xc00003a4d0) (0xc0008880a0) Create 
stream\nI0813 18:18:49.682287 393 log.go:172] (0xc00003a4d0) (0xc0008880a0) Stream added, broadcasting: 5\nI0813 18:18:49.683111 393 log.go:172] (0xc00003a4d0) Reply frame received for 5\nI0813 18:18:49.739188 393 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0813 18:18:49.739236 393 log.go:172] (0xc0008880a0) (5) Data frame handling\nI0813 18:18:49.739252 393 log.go:172] (0xc0008880a0) (5) Data frame sent\nI0813 18:18:49.739262 393 log.go:172] (0xc00003a4d0) Data frame received for 5\nI0813 18:18:49.739276 393 log.go:172] (0xc0008880a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0813 18:18:49.739322 393 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0813 18:18:49.739350 393 log.go:172] (0xc000888000) (3) Data frame handling\nI0813 18:18:49.739377 393 log.go:172] (0xc000888000) (3) Data frame sent\nI0813 18:18:49.739391 393 log.go:172] (0xc00003a4d0) Data frame received for 3\nI0813 18:18:49.739402 393 log.go:172] (0xc000888000) (3) Data frame handling\nI0813 18:18:49.740646 393 log.go:172] (0xc00003a4d0) Data frame received for 1\nI0813 18:18:49.740678 393 log.go:172] (0xc0004b8be0) (1) Data frame handling\nI0813 18:18:49.740699 393 log.go:172] (0xc0004b8be0) (1) Data frame sent\nI0813 18:18:49.740853 393 log.go:172] (0xc00003a4d0) (0xc0004b8be0) Stream removed, broadcasting: 1\nI0813 18:18:49.740917 393 log.go:172] (0xc00003a4d0) Go away received\nI0813 18:18:49.741282 393 log.go:172] (0xc00003a4d0) (0xc0004b8be0) Stream removed, broadcasting: 1\nI0813 18:18:49.741310 393 log.go:172] (0xc00003a4d0) (0xc000888000) Stream removed, broadcasting: 3\nI0813 18:18:49.741325 393 log.go:172] (0xc00003a4d0) (0xc0008880a0) Stream removed, broadcasting: 5\n" Aug 13 18:18:49.748: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 13 18:18:49.748: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 13 
18:18:49.748: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3996 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 13 18:18:50.092: INFO: stderr: "I0813 18:18:50.023940 413 log.go:172] (0xc00003bb80) (0xc00081b4a0) Create stream\nI0813 18:18:50.024039 413 log.go:172] (0xc00003bb80) (0xc00081b4a0) Stream added, broadcasting: 1\nI0813 18:18:50.027243 413 log.go:172] (0xc00003bb80) Reply frame received for 1\nI0813 18:18:50.027296 413 log.go:172] (0xc00003bb80) (0xc000b40000) Create stream\nI0813 18:18:50.027314 413 log.go:172] (0xc00003bb80) (0xc000b40000) Stream added, broadcasting: 3\nI0813 18:18:50.028191 413 log.go:172] (0xc00003bb80) Reply frame received for 3\nI0813 18:18:50.028230 413 log.go:172] (0xc00003bb80) (0xc000b400a0) Create stream\nI0813 18:18:50.028238 413 log.go:172] (0xc00003bb80) (0xc000b400a0) Stream added, broadcasting: 5\nI0813 18:18:50.029323 413 log.go:172] (0xc00003bb80) Reply frame received for 5\nI0813 18:18:50.084474 413 log.go:172] (0xc00003bb80) Data frame received for 3\nI0813 18:18:50.084496 413 log.go:172] (0xc000b40000) (3) Data frame handling\nI0813 18:18:50.084510 413 log.go:172] (0xc000b40000) (3) Data frame sent\nI0813 18:18:50.084668 413 log.go:172] (0xc00003bb80) Data frame received for 5\nI0813 18:18:50.084694 413 log.go:172] (0xc000b400a0) (5) Data frame handling\nI0813 18:18:50.084703 413 log.go:172] (0xc000b400a0) (5) Data frame sent\nI0813 18:18:50.084711 413 log.go:172] (0xc00003bb80) Data frame received for 5\nI0813 18:18:50.084719 413 log.go:172] (0xc000b400a0) (5) Data frame handling\nI0813 18:18:50.084861 413 log.go:172] (0xc00003bb80) Data frame received for 3\nI0813 18:18:50.084883 413 log.go:172] (0xc000b40000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0813 18:18:50.085800 413 log.go:172] 
(0xc00003bb80) Data frame received for 1\nI0813 18:18:50.085823 413 log.go:172] (0xc00081b4a0) (1) Data frame handling\nI0813 18:18:50.085838 413 log.go:172] (0xc00081b4a0) (1) Data frame sent\nI0813 18:18:50.085851 413 log.go:172] (0xc00003bb80) (0xc00081b4a0) Stream removed, broadcasting: 1\nI0813 18:18:50.085868 413 log.go:172] (0xc00003bb80) Go away received\nI0813 18:18:50.086223 413 log.go:172] (0xc00003bb80) (0xc00081b4a0) Stream removed, broadcasting: 1\nI0813 18:18:50.086244 413 log.go:172] (0xc00003bb80) (0xc000b40000) Stream removed, broadcasting: 3\nI0813 18:18:50.086251 413 log.go:172] (0xc00003bb80) (0xc000b400a0) Stream removed, broadcasting: 5\n" Aug 13 18:18:50.092: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 13 18:18:50.092: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 13 18:18:50.092: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3996 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Aug 13 18:18:50.389: INFO: stderr: "I0813 18:18:50.320087 433 log.go:172] (0xc00044e2c0) (0xc000a860a0) Create stream\nI0813 18:18:50.320134 433 log.go:172] (0xc00044e2c0) (0xc000a860a0) Stream added, broadcasting: 1\nI0813 18:18:50.322233 433 log.go:172] (0xc00044e2c0) Reply frame received for 1\nI0813 18:18:50.322263 433 log.go:172] (0xc00044e2c0) (0xc000a4a000) Create stream\nI0813 18:18:50.322270 433 log.go:172] (0xc00044e2c0) (0xc000a4a000) Stream added, broadcasting: 3\nI0813 18:18:50.323027 433 log.go:172] (0xc00044e2c0) Reply frame received for 3\nI0813 18:18:50.323053 433 log.go:172] (0xc00044e2c0) (0xc000a4a0a0) Create stream\nI0813 18:18:50.323061 433 log.go:172] (0xc00044e2c0) (0xc000a4a0a0) Stream added, broadcasting: 5\nI0813 18:18:50.323821 433 log.go:172] (0xc00044e2c0) Reply frame received for 
5\nI0813 18:18:50.380854 433 log.go:172] (0xc00044e2c0) Data frame received for 5\nI0813 18:18:50.380888 433 log.go:172] (0xc000a4a0a0) (5) Data frame handling\nI0813 18:18:50.380898 433 log.go:172] (0xc000a4a0a0) (5) Data frame sent\nI0813 18:18:50.380908 433 log.go:172] (0xc00044e2c0) Data frame received for 5\nI0813 18:18:50.380918 433 log.go:172] (0xc000a4a0a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0813 18:18:50.380945 433 log.go:172] (0xc00044e2c0) Data frame received for 3\nI0813 18:18:50.380957 433 log.go:172] (0xc000a4a000) (3) Data frame handling\nI0813 18:18:50.380975 433 log.go:172] (0xc000a4a000) (3) Data frame sent\nI0813 18:18:50.380990 433 log.go:172] (0xc00044e2c0) Data frame received for 3\nI0813 18:18:50.381008 433 log.go:172] (0xc000a4a000) (3) Data frame handling\nI0813 18:18:50.382220 433 log.go:172] (0xc00044e2c0) Data frame received for 1\nI0813 18:18:50.382249 433 log.go:172] (0xc000a860a0) (1) Data frame handling\nI0813 18:18:50.382268 433 log.go:172] (0xc000a860a0) (1) Data frame sent\nI0813 18:18:50.382296 433 log.go:172] (0xc00044e2c0) (0xc000a860a0) Stream removed, broadcasting: 1\nI0813 18:18:50.382445 433 log.go:172] (0xc00044e2c0) Go away received\nI0813 18:18:50.382846 433 log.go:172] (0xc00044e2c0) (0xc000a860a0) Stream removed, broadcasting: 1\nI0813 18:18:50.382866 433 log.go:172] (0xc00044e2c0) (0xc000a4a000) Stream removed, broadcasting: 3\nI0813 18:18:50.382877 433 log.go:172] (0xc00044e2c0) (0xc000a4a0a0) Stream removed, broadcasting: 5\n" Aug 13 18:18:50.389: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Aug 13 18:18:50.389: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Aug 13 18:18:50.433: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 13 
18:18:50.433: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 13 18:18:50.433: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Aug 13 18:18:50.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3996 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 13 18:18:50.642: INFO: stderr: "I0813 18:18:50.571217 454 log.go:172] (0xc000a3c000) (0xc0004d0000) Create stream\nI0813 18:18:50.571264 454 log.go:172] (0xc000a3c000) (0xc0004d0000) Stream added, broadcasting: 1\nI0813 18:18:50.573579 454 log.go:172] (0xc000a3c000) Reply frame received for 1\nI0813 18:18:50.573601 454 log.go:172] (0xc000a3c000) (0xc0004d0140) Create stream\nI0813 18:18:50.573607 454 log.go:172] (0xc000a3c000) (0xc0004d0140) Stream added, broadcasting: 3\nI0813 18:18:50.574194 454 log.go:172] (0xc000a3c000) Reply frame received for 3\nI0813 18:18:50.574224 454 log.go:172] (0xc000a3c000) (0xc000634000) Create stream\nI0813 18:18:50.574241 454 log.go:172] (0xc000a3c000) (0xc000634000) Stream added, broadcasting: 5\nI0813 18:18:50.575010 454 log.go:172] (0xc000a3c000) Reply frame received for 5\nI0813 18:18:50.633282 454 log.go:172] (0xc000a3c000) Data frame received for 5\nI0813 18:18:50.633310 454 log.go:172] (0xc000634000) (5) Data frame handling\nI0813 18:18:50.633319 454 log.go:172] (0xc000634000) (5) Data frame sent\nI0813 18:18:50.633326 454 log.go:172] (0xc000a3c000) Data frame received for 5\nI0813 18:18:50.633331 454 log.go:172] (0xc000634000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:18:50.633353 454 log.go:172] (0xc000a3c000) Data frame received for 3\nI0813 18:18:50.633360 454 log.go:172] (0xc0004d0140) (3) Data frame handling\nI0813 18:18:50.633366 454 log.go:172] (0xc0004d0140) (3) Data 
frame sent\nI0813 18:18:50.633426 454 log.go:172] (0xc000a3c000) Data frame received for 3\nI0813 18:18:50.633444 454 log.go:172] (0xc0004d0140) (3) Data frame handling\nI0813 18:18:50.634699 454 log.go:172] (0xc000a3c000) Data frame received for 1\nI0813 18:18:50.634716 454 log.go:172] (0xc0004d0000) (1) Data frame handling\nI0813 18:18:50.634748 454 log.go:172] (0xc0004d0000) (1) Data frame sent\nI0813 18:18:50.634990 454 log.go:172] (0xc000a3c000) (0xc0004d0000) Stream removed, broadcasting: 1\nI0813 18:18:50.635034 454 log.go:172] (0xc000a3c000) Go away received\nI0813 18:18:50.635523 454 log.go:172] (0xc000a3c000) (0xc0004d0000) Stream removed, broadcasting: 1\nI0813 18:18:50.635544 454 log.go:172] (0xc000a3c000) (0xc0004d0140) Stream removed, broadcasting: 3\nI0813 18:18:50.635554 454 log.go:172] (0xc000a3c000) (0xc000634000) Stream removed, broadcasting: 5\n" Aug 13 18:18:50.642: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 13 18:18:50.642: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 13 18:18:50.642: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3996 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 13 18:18:50.886: INFO: stderr: "I0813 18:18:50.768948 474 log.go:172] (0xc000994a50) (0xc0005e9680) Create stream\nI0813 18:18:50.769001 474 log.go:172] (0xc000994a50) (0xc0005e9680) Stream added, broadcasting: 1\nI0813 18:18:50.771757 474 log.go:172] (0xc000994a50) Reply frame received for 1\nI0813 18:18:50.771800 474 log.go:172] (0xc000994a50) (0xc000898000) Create stream\nI0813 18:18:50.771816 474 log.go:172] (0xc000994a50) (0xc000898000) Stream added, broadcasting: 3\nI0813 18:18:50.772898 474 log.go:172] (0xc000994a50) Reply frame received for 3\nI0813 18:18:50.772933 474 log.go:172] (0xc000994a50) 
(0xc000942000) Create stream\nI0813 18:18:50.772945 474 log.go:172] (0xc000994a50) (0xc000942000) Stream added, broadcasting: 5\nI0813 18:18:50.773871 474 log.go:172] (0xc000994a50) Reply frame received for 5\nI0813 18:18:50.843587 474 log.go:172] (0xc000994a50) Data frame received for 5\nI0813 18:18:50.843605 474 log.go:172] (0xc000942000) (5) Data frame handling\nI0813 18:18:50.843616 474 log.go:172] (0xc000942000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:18:50.875875 474 log.go:172] (0xc000994a50) Data frame received for 3\nI0813 18:18:50.875920 474 log.go:172] (0xc000898000) (3) Data frame handling\nI0813 18:18:50.875969 474 log.go:172] (0xc000898000) (3) Data frame sent\nI0813 18:18:50.876270 474 log.go:172] (0xc000994a50) Data frame received for 3\nI0813 18:18:50.876314 474 log.go:172] (0xc000898000) (3) Data frame handling\nI0813 18:18:50.876350 474 log.go:172] (0xc000994a50) Data frame received for 5\nI0813 18:18:50.876395 474 log.go:172] (0xc000942000) (5) Data frame handling\nI0813 18:18:50.878086 474 log.go:172] (0xc000994a50) Data frame received for 1\nI0813 18:18:50.878124 474 log.go:172] (0xc0005e9680) (1) Data frame handling\nI0813 18:18:50.878147 474 log.go:172] (0xc0005e9680) (1) Data frame sent\nI0813 18:18:50.878174 474 log.go:172] (0xc000994a50) (0xc0005e9680) Stream removed, broadcasting: 1\nI0813 18:18:50.878202 474 log.go:172] (0xc000994a50) Go away received\nI0813 18:18:50.878708 474 log.go:172] (0xc000994a50) (0xc0005e9680) Stream removed, broadcasting: 1\nI0813 18:18:50.878755 474 log.go:172] (0xc000994a50) (0xc000898000) Stream removed, broadcasting: 3\nI0813 18:18:50.878775 474 log.go:172] (0xc000994a50) (0xc000942000) Stream removed, broadcasting: 5\n" Aug 13 18:18:50.886: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 13 18:18:50.886: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> 
'/tmp/index.html' Aug 13 18:18:50.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-3996 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Aug 13 18:18:51.154: INFO: stderr: "I0813 18:18:51.013797 496 log.go:172] (0xc00096b3f0) (0xc000767b80) Create stream\nI0813 18:18:51.013854 496 log.go:172] (0xc00096b3f0) (0xc000767b80) Stream added, broadcasting: 1\nI0813 18:18:51.020964 496 log.go:172] (0xc00096b3f0) Reply frame received for 1\nI0813 18:18:51.021017 496 log.go:172] (0xc00096b3f0) (0xc00066f5e0) Create stream\nI0813 18:18:51.021032 496 log.go:172] (0xc00096b3f0) (0xc00066f5e0) Stream added, broadcasting: 3\nI0813 18:18:51.023987 496 log.go:172] (0xc00096b3f0) Reply frame received for 3\nI0813 18:18:51.024014 496 log.go:172] (0xc00096b3f0) (0xc00051ea00) Create stream\nI0813 18:18:51.024022 496 log.go:172] (0xc00096b3f0) (0xc00051ea00) Stream added, broadcasting: 5\nI0813 18:18:51.024865 496 log.go:172] (0xc00096b3f0) Reply frame received for 5\nI0813 18:18:51.090684 496 log.go:172] (0xc00096b3f0) Data frame received for 5\nI0813 18:18:51.090746 496 log.go:172] (0xc00051ea00) (5) Data frame handling\nI0813 18:18:51.090780 496 log.go:172] (0xc00051ea00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:18:51.146336 496 log.go:172] (0xc00096b3f0) Data frame received for 3\nI0813 18:18:51.146360 496 log.go:172] (0xc00066f5e0) (3) Data frame handling\nI0813 18:18:51.146372 496 log.go:172] (0xc00066f5e0) (3) Data frame sent\nI0813 18:18:51.146377 496 log.go:172] (0xc00096b3f0) Data frame received for 3\nI0813 18:18:51.146381 496 log.go:172] (0xc00066f5e0) (3) Data frame handling\nI0813 18:18:51.147810 496 log.go:172] (0xc00096b3f0) Data frame received for 5\nI0813 18:18:51.147826 496 log.go:172] (0xc00051ea00) (5) Data frame handling\nI0813 18:18:51.148197 496 log.go:172] (0xc00096b3f0) Data frame received for 
1\nI0813 18:18:51.148208 496 log.go:172] (0xc000767b80) (1) Data frame handling\nI0813 18:18:51.148218 496 log.go:172] (0xc000767b80) (1) Data frame sent\nI0813 18:18:51.148227 496 log.go:172] (0xc00096b3f0) (0xc000767b80) Stream removed, broadcasting: 1\nI0813 18:18:51.148351 496 log.go:172] (0xc00096b3f0) Go away received\nI0813 18:18:51.148513 496 log.go:172] (0xc00096b3f0) (0xc000767b80) Stream removed, broadcasting: 1\nI0813 18:18:51.148527 496 log.go:172] (0xc00096b3f0) (0xc00066f5e0) Stream removed, broadcasting: 3\nI0813 18:18:51.148533 496 log.go:172] (0xc00096b3f0) (0xc00051ea00) Stream removed, broadcasting: 5\n" Aug 13 18:18:51.154: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Aug 13 18:18:51.154: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Aug 13 18:18:51.154: INFO: Waiting for statefulset status.replicas updated to 0 Aug 13 18:18:51.157: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Aug 13 18:19:01.166: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 13 18:19:01.166: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 13 18:19:01.166: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 13 18:19:01.186: INFO: POD NODE PHASE GRACE CONDITIONS Aug 13 18:19:01.186: INFO: ss-0 kali-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC }] Aug 13 18:19:01.186: INFO: ss-1 
kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:38 +0000 UTC }] Aug 13 18:19:01.186: INFO: ss-2 kali-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC }] Aug 13 18:19:01.186: INFO: Aug 13 18:19:01.186: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 13 18:19:02.611: INFO: POD NODE PHASE GRACE CONDITIONS Aug 13 18:19:02.611: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC }] Aug 13 18:19:02.611: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:38 +0000 UTC }] Aug 13 18:19:02.611: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC }] Aug 13 18:19:02.611: INFO: Aug 13 18:19:02.611: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 13 18:19:03.829: INFO: POD NODE PHASE GRACE CONDITIONS Aug 13 18:19:03.829: INFO: ss-0 kali-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC }] Aug 13 18:19:03.829: INFO: ss-1 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:38 +0000 UTC }] Aug 13 18:19:03.829: INFO: ss-2 kali-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC }] Aug 13 18:19:03.829: INFO: Aug 13 18:19:03.829: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 13 18:19:04.833: INFO: POD NODE PHASE GRACE CONDITIONS Aug 13 18:19:04.833: INFO: ss-0 kali-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:17 +0000 UTC }] Aug 13 18:19:04.833: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:38 +0000 UTC }] Aug 13 18:19:04.833: INFO: ss-2 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC }] Aug 13 
18:19:04.833: INFO: Aug 13 18:19:04.833: INFO: StatefulSet ss has not reached scale 0, at 3 Aug 13 18:19:05.847: INFO: POD NODE PHASE GRACE CONDITIONS Aug 13 18:19:05.847: INFO: ss-1 kali-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:51 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-13 18:18:38 +0000 UTC }] Aug 13 18:19:05.847: INFO: Aug 13 18:19:05.847: INFO: StatefulSet ss has not reached scale 0, at 1 Aug 13 18:19:06.852: INFO: Verifying statefulset ss doesn't scale past 0 for another 4.326341799s Aug 13 18:19:07.855: INFO: Verifying statefulset ss doesn't scale past 0 for another 3.321944113s Aug 13 18:19:08.859: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.318288696s Aug 13 18:19:09.862: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.31501513s Aug 13 18:19:10.866: INFO: Verifying statefulset ss doesn't scale past 0 for another 311.493038ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-3996 Aug 13 18:19:11.870: INFO: Scaling statefulset ss to 0 Aug 13 18:19:11.877: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Aug 13 18:19:11.878: INFO: Deleting all statefulset in ns statefulset-3996 Aug 13 18:19:11.880: INFO: Scaling statefulset ss to 0 Aug 13 18:19:11.887: INFO: Waiting for statefulset status.replicas updated to 0 Aug 13 18:19:11.889: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:19:11.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3996" for this suite. • [SLOW TEST:55.077 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":275,"completed":30,"skipped":469,"failed":0} SSSSS ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:19:11.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
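The step just logged creates a pod with dnsPolicy: None and a customized dnsConfig. As a minimal sketch (not the e2e framework's actual code), the manifest it submits has roughly this shape, using the values this test run actually uses (agnhost 2.12 image, nameserver 1.1.1.1, search suffix resolv.conf.local):

```python
# Sketch of the pod manifest behind "dnsPolicy=None and customized dnsConfig".
# Values mirror the pod logged in this run; the helper itself is illustrative.
def make_dns_test_pod(name, namespace):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name, "namespace": namespace},
        "spec": {
            "containers": [{
                "name": "agnhost",
                "image": "us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12",
                "args": ["pause"],
            }],
            # dnsPolicy None makes the kubelet ignore cluster DNS and build
            # the pod's resolv.conf purely from dnsConfig below.
            "dnsPolicy": "None",
            "dnsConfig": {
                "nameservers": ["1.1.1.1"],
                "searches": ["resolv.conf.local"],
            },
        },
    }

pod = make_dns_test_pod("dns-4051", "dns-4051")
print(pod["spec"]["dnsConfig"]["nameservers"])  # ['1.1.1.1']
```

With a cluster available, this dict could be submitted via `kubectl apply -f -` exactly as the framework does.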
Aug 13 18:19:12.083: INFO: Created pod &Pod{ObjectMeta:{dns-4051 dns-4051 /api/v1/namespaces/dns-4051/pods/dns-4051 b77c129f-038a-4b52-a1bf-4c4cef5cb4ca 9271633 0 2020-08-13 18:19:12 +0000 UTC map[] map[] [] [] [{e2e.test Update v1 2020-08-13 18:19:12 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 67 111 110 102 105 103 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 115 101 114 118 101 114 115 34 58 123 125 44 34 102 58 115 101 97 114 99 104 101 115 34 58 123 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xgvsd,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xgvsd,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xgvsd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kuber
netes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:19:12.088: INFO: The status of Pod dns-4051 is Pending, waiting for it to be Running (with Ready = true) Aug 13 18:19:14.092: INFO: The status of Pod dns-4051 is Pending, waiting for it to be Running (with Ready = true) Aug 13 18:19:16.094: INFO: The status of Pod dns-4051 is Pending, waiting for it to be Running (with Ready = true) Aug 13 18:19:18.158: INFO: The status of Pod dns-4051 is Running (Ready = true) STEP: Verifying customized DNS suffix list is configured on pod... 
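The verification steps that follow exec `/agnhost dns-suffix` and `/agnhost dns-server-list` inside the pod. Conceptually both boil down to reading the pod's /etc/resolv.conf, which a dnsPolicy of None populates only from the dnsConfig above. A rough local illustration of that parsing (not agnhost's implementation):

```python
# Illustrative only: split a resolv.conf produced by dnsPolicy None into
# the nameserver list and the search-suffix list the test checks for.
def parse_resolv_conf(text):
    nameservers, searches = [], []
    for line in text.splitlines():
        fields = line.split()
        if not fields:
            continue
        if fields[0] == "nameserver":
            nameservers.extend(fields[1:])
        elif fields[0] == "search":
            searches.extend(fields[1:])
    return nameservers, searches

sample = "search resolv.conf.local\nnameserver 1.1.1.1\n"
print(parse_resolv_conf(sample))  # (['1.1.1.1'], ['resolv.conf.local'])
```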
Aug 13 18:19:18.159: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4051 PodName:dns-4051 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 13 18:19:18.159: INFO: >>> kubeConfig: /root/.kube/config I0813 18:19:18.551873 7 log.go:172] (0xc001e1c210) (0xc002403c20) Create stream I0813 18:19:18.551915 7 log.go:172] (0xc001e1c210) (0xc002403c20) Stream added, broadcasting: 1 I0813 18:19:18.554548 7 log.go:172] (0xc001e1c210) Reply frame received for 1 I0813 18:19:18.554588 7 log.go:172] (0xc001e1c210) (0xc0012a9400) Create stream I0813 18:19:18.554598 7 log.go:172] (0xc001e1c210) (0xc0012a9400) Stream added, broadcasting: 3 I0813 18:19:18.555451 7 log.go:172] (0xc001e1c210) Reply frame received for 3 I0813 18:19:18.555479 7 log.go:172] (0xc001e1c210) (0xc002403cc0) Create stream I0813 18:19:18.555489 7 log.go:172] (0xc001e1c210) (0xc002403cc0) Stream added, broadcasting: 5 I0813 18:19:18.556394 7 log.go:172] (0xc001e1c210) Reply frame received for 5 I0813 18:19:18.777933 7 log.go:172] (0xc001e1c210) Data frame received for 3 I0813 18:19:18.777971 7 log.go:172] (0xc0012a9400) (3) Data frame handling I0813 18:19:18.777997 7 log.go:172] (0xc0012a9400) (3) Data frame sent I0813 18:19:18.779784 7 log.go:172] (0xc001e1c210) Data frame received for 3 I0813 18:19:18.779821 7 log.go:172] (0xc0012a9400) (3) Data frame handling I0813 18:19:18.779935 7 log.go:172] (0xc001e1c210) Data frame received for 5 I0813 18:19:18.779973 7 log.go:172] (0xc002403cc0) (5) Data frame handling I0813 18:19:18.781948 7 log.go:172] (0xc001e1c210) Data frame received for 1 I0813 18:19:18.781989 7 log.go:172] (0xc002403c20) (1) Data frame handling I0813 18:19:18.782042 7 log.go:172] (0xc002403c20) (1) Data frame sent I0813 18:19:18.782087 7 log.go:172] (0xc001e1c210) (0xc002403c20) Stream removed, broadcasting: 1 I0813 18:19:18.782214 7 log.go:172] (0xc001e1c210) Go away received I0813 18:19:18.782261 7 log.go:172] (0xc001e1c210) 
(0xc002403c20) Stream removed, broadcasting: 1 I0813 18:19:18.782292 7 log.go:172] (0xc001e1c210) (0xc0012a9400) Stream removed, broadcasting: 3 I0813 18:19:18.782319 7 log.go:172] (0xc001e1c210) (0xc002403cc0) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... Aug 13 18:19:18.782: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4051 PodName:dns-4051 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 13 18:19:18.782: INFO: >>> kubeConfig: /root/.kube/config I0813 18:19:18.866433 7 log.go:172] (0xc0020ee000) (0xc0012a9720) Create stream I0813 18:19:18.866462 7 log.go:172] (0xc0020ee000) (0xc0012a9720) Stream added, broadcasting: 1 I0813 18:19:18.868177 7 log.go:172] (0xc0020ee000) Reply frame received for 1 I0813 18:19:18.868196 7 log.go:172] (0xc0020ee000) (0xc0012a9860) Create stream I0813 18:19:18.868205 7 log.go:172] (0xc0020ee000) (0xc0012a9860) Stream added, broadcasting: 3 I0813 18:19:18.868964 7 log.go:172] (0xc0020ee000) Reply frame received for 3 I0813 18:19:18.868981 7 log.go:172] (0xc0020ee000) (0xc0012a99a0) Create stream I0813 18:19:18.868987 7 log.go:172] (0xc0020ee000) (0xc0012a99a0) Stream added, broadcasting: 5 I0813 18:19:18.869704 7 log.go:172] (0xc0020ee000) Reply frame received for 5 I0813 18:19:18.937260 7 log.go:172] (0xc0020ee000) Data frame received for 3 I0813 18:19:18.937288 7 log.go:172] (0xc0012a9860) (3) Data frame handling I0813 18:19:18.937301 7 log.go:172] (0xc0012a9860) (3) Data frame sent I0813 18:19:18.939521 7 log.go:172] (0xc0020ee000) Data frame received for 5 I0813 18:19:18.939565 7 log.go:172] (0xc0012a99a0) (5) Data frame handling I0813 18:19:18.939839 7 log.go:172] (0xc0020ee000) Data frame received for 3 I0813 18:19:18.939879 7 log.go:172] (0xc0012a9860) (3) Data frame handling I0813 18:19:18.941492 7 log.go:172] (0xc0020ee000) Data frame received for 1 I0813 18:19:18.941510 7 log.go:172] (0xc0012a9720) (1) 
Data frame handling I0813 18:19:18.941518 7 log.go:172] (0xc0012a9720) (1) Data frame sent I0813 18:19:18.941529 7 log.go:172] (0xc0020ee000) (0xc0012a9720) Stream removed, broadcasting: 1 I0813 18:19:18.941551 7 log.go:172] (0xc0020ee000) Go away received I0813 18:19:18.941672 7 log.go:172] (0xc0020ee000) (0xc0012a9720) Stream removed, broadcasting: 1 I0813 18:19:18.941693 7 log.go:172] (0xc0020ee000) (0xc0012a9860) Stream removed, broadcasting: 3 I0813 18:19:18.941699 7 log.go:172] (0xc0020ee000) (0xc0012a99a0) Stream removed, broadcasting: 5 Aug 13 18:19:18.941: INFO: Deleting pod dns-4051... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:19:19.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4051" for this suite. • [SLOW TEST:7.929 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":275,"completed":31,"skipped":474,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:19:19.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account 
to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 13 18:19:20.296: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties Aug 13 18:19:23.959: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7358 create -f -' Aug 13 18:19:35.025: INFO: stderr: "" Aug 13 18:19:35.025: INFO: stdout: "e2e-test-crd-publish-openapi-5856-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 13 18:19:35.025: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7358 delete e2e-test-crd-publish-openapi-5856-crds test-foo' Aug 13 18:19:35.219: INFO: stderr: "" Aug 13 18:19:35.219: INFO: stdout: "e2e-test-crd-publish-openapi-5856-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" Aug 13 18:19:35.219: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7358 apply -f -' Aug 13 18:19:35.623: INFO: stderr: "" Aug 13 18:19:35.623: INFO: stdout: "e2e-test-crd-publish-openapi-5856-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" Aug 13 18:19:35.623: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7358 delete e2e-test-crd-publish-openapi-5856-crds test-foo' Aug 13 18:19:35.727: INFO: stderr: "" Aug 13 18:19:35.727: INFO: stdout: "e2e-test-crd-publish-openapi-5856-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema Aug 13 18:19:35.727: INFO: 
Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7358 create -f -' Aug 13 18:19:35.961: INFO: rc: 1 Aug 13 18:19:35.961: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7358 apply -f -' Aug 13 18:19:36.305: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties Aug 13 18:19:36.305: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7358 create -f -' Aug 13 18:19:36.584: INFO: rc: 1 Aug 13 18:19:36.584: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7358 apply -f -' Aug 13 18:19:36.848: INFO: rc: 1 STEP: kubectl explain works to explain CR properties Aug 13 18:19:36.848: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5856-crds' Aug 13 18:19:37.143: INFO: stderr: "" Aug 13 18:19:37.143: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5856-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively Aug 13 18:19:37.144: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5856-crds.metadata' Aug 13 18:19:37.441: INFO: stderr: "" Aug 13 18:19:37.441: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5856-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. 
More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. 
If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. 
May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. 
May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n pass them unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" Aug 13 18:19:37.442: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5856-crds.spec' Aug 13 18:19:37.713: INFO: stderr: "" Aug 13 18:19:37.713: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5856-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" Aug 13 18:19:37.713: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5856-crds.spec.bars' Aug 13 18:19:38.017: INFO: stderr: "" Aug 13 18:19:38.017: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5856-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n 
bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist Aug 13 18:19:38.017: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5856-crds.spec.bars2' Aug 13 18:19:38.267: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:19:41.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7358" for this suite. • [SLOW TEST:22.020 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":275,"completed":32,"skipped":475,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:19:41.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 13 18:19:42.068: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7" in namespace "downward-api-8738" to be "Succeeded or Failed" Aug 13 18:19:42.071: INFO: Pod "downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.591337ms Aug 13 18:19:44.128: INFO: Pod "downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059217907s Aug 13 18:19:46.174: INFO: Pod "downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10565982s Aug 13 18:19:48.600: INFO: Pod "downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.531519719s Aug 13 18:19:50.602: INFO: Pod "downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.534056932s STEP: Saw pod success Aug 13 18:19:50.603: INFO: Pod "downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7" satisfied condition "Succeeded or Failed" Aug 13 18:19:50.605: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7 container client-container: STEP: delete the pod Aug 13 18:19:50.911: INFO: Waiting for pod downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7 to disappear Aug 13 18:19:50.913: INFO: Pod downwardapi-volume-f8f4689e-7464-47f0-99dc-9f138ff7cab7 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:19:50.913: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8738" for this suite. • [SLOW TEST:9.076 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":33,"skipped":492,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:19:50.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default 
service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:20:03.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7609" for this suite. STEP: Destroying namespace "nsdeletetest-4662" for this suite. Aug 13 18:20:03.556: INFO: Namespace nsdeletetest-4662 was already deleted STEP: Destroying namespace "nsdeletetest-3704" for this suite. 
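The Namespaces test above exercises cascading deletion: it creates a namespace, puts a service in it, deletes the namespace, recreates it under the same name, and then verifies the recreated namespace contains no services. A minimal sketch of that create → delete → recreate → verify flow against a toy in-memory cluster model (the `FakeCluster` class and its method names are illustrative, not the e2e framework's API):

```python
class FakeCluster:
    """Toy stand-in for a cluster API server: namespaces own their
    services, so deleting a namespace cascades to everything inside it."""

    def __init__(self):
        self.services = {}  # namespace name -> set of service names

    def create_namespace(self, ns):
        self.services.setdefault(ns, set())

    def create_service(self, ns, name):
        self.services[ns].add(name)

    def delete_namespace(self, ns):
        # Cascading delete: all services in the namespace go with it.
        self.services.pop(ns, None)

    def list_services(self, ns):
        return sorted(self.services.get(ns, set()))


def check_namespace_cleanup(cluster, ns="nsdeletetest"):
    """Mirror of the test flow: create, populate, delete, recreate, verify."""
    cluster.create_namespace(ns)
    cluster.create_service(ns, "test-service")
    cluster.delete_namespace(ns)
    cluster.create_namespace(ns)      # recreate with the same name
    return cluster.list_services(ns)  # must come back empty
```

In the real test the "Waiting for the namespace to be removed" step matters because namespace deletion is asynchronous; the toy model collapses that wait into an immediate delete.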
• [SLOW TEST:12.606 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":275,"completed":34,"skipped":507,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:20:03.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 13 18:20:03.637: INFO: Waiting up to 5m0s for pod "pod-23840e16-93c1-4927-8f39-da51bb324db1" in namespace "emptydir-1382" to be "Succeeded or Failed" Aug 13 18:20:03.689: INFO: Pod "pod-23840e16-93c1-4927-8f39-da51bb324db1": Phase="Pending", Reason="", readiness=false. Elapsed: 51.705455ms Aug 13 18:20:05.697: INFO: Pod "pod-23840e16-93c1-4927-8f39-da51bb324db1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059425409s Aug 13 18:20:07.701: INFO: Pod "pod-23840e16-93c1-4927-8f39-da51bb324db1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.063852506s Aug 13 18:20:09.994: INFO: Pod "pod-23840e16-93c1-4927-8f39-da51bb324db1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356642642s Aug 13 18:20:12.009: INFO: Pod "pod-23840e16-93c1-4927-8f39-da51bb324db1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.371940739s STEP: Saw pod success Aug 13 18:20:12.009: INFO: Pod "pod-23840e16-93c1-4927-8f39-da51bb324db1" satisfied condition "Succeeded or Failed" Aug 13 18:20:12.062: INFO: Trying to get logs from node kali-worker pod pod-23840e16-93c1-4927-8f39-da51bb324db1 container test-container: STEP: delete the pod Aug 13 18:20:12.167: INFO: Waiting for pod pod-23840e16-93c1-4927-8f39-da51bb324db1 to disappear Aug 13 18:20:12.175: INFO: Pod pod-23840e16-93c1-4927-8f39-da51bb324db1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:20:12.175: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1382" for this suite. 
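The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines above come from a poll loop: fetch the pod phase, log the elapsed time, and stop on a terminal phase or timeout. A minimal sketch of that pattern (the function name, signature, and injectable `clock`/`sleep` hooks are illustrative, not the framework's actual helper):

```python
import time


def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0,
                           clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it reports a terminal pod phase, logging
    the elapsed time on each poll, roughly like the e2e framework's
    pod-wait helper. Raises TimeoutError if the deadline passes first."""
    start = clock()
    while True:
        elapsed = clock() - start
        phase = get_phase()
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {timeout}s")
        sleep(interval)
```

With a stubbed phase sequence such as `Pending, Pending, Running, Succeeded` (and a no-op `sleep`), the loop produces exactly the cadence of log lines seen above and returns once the pod reaches a terminal phase.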
• [SLOW TEST:8.620 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":35,"skipped":512,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:20:12.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready Aug 13 18:20:13.030: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set Aug 13 18:20:15.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939613, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939613, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939613, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939613, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 13 18:20:17.486: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939613, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939613, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939613, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939613, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 13 18:20:20.522: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 
[It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 13 18:20:20.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:20:21.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-6001" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137 • [SLOW TEST:9.755 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":275,"completed":36,"skipped":524,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 
18:20:21.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Aug 13 18:20:22.065: INFO: PodSpec: initContainers in spec.initContainers Aug 13 18:21:14.423: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2c6800a8-22e7-4b2a-93ed-eec1dd40b773", GenerateName:"", Namespace:"init-container-1894", SelfLink:"/api/v1/namespaces/init-container-1894/pods/pod-init-2c6800a8-22e7-4b2a-93ed-eec1dd40b773", UID:"72b364a9-68c0-47dc-9178-59e038a84774", ResourceVersion:"9272347", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63732939622, loc:(*time.Location)(0x7b220e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"65020392"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"e2e.test", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00428fb00), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00428fb20)}, v1.ManagedFieldsEntry{Manager:"kubelet", Operation:"Update", APIVersion:"v1", Time:(*v1.Time)(0xc00428fb40), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc00428fb60)}}}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-crvv6", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), 
EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc004304ac0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-crvv6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", 
TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-crvv6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.2", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-crvv6", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003620408), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"kali-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000aa80e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0036204c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0036204e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0036204e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0036204ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939622, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 
init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939622, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939622, loc:(*time.Location)(0x7b220e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732939622, loc:(*time.Location)(0x7b220e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.13", PodIP:"10.244.2.254", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.254"}}, StartTime:(*v1.Time)(0xc00428fba0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00428fc00), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000aa8230)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://1b1918afc793a951d30295dcc6f8ebbb6ad2cfbdcd4ccd1572ab586dd9fb8a16", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00428fc20), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, 
LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00428fbc0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.2", ImageID:"", ContainerID:"", Started:(*bool)(0xc00362058f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:21:14.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1894" for this suite. 
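The pod dump above shows the behavior under test: `init1` (running `/bin/false`) has `RestartCount:3` while `init2` and the app container `run1` were never started. Init containers run sequentially; under `RestartPolicy: Always` a failing one is retried in place, and nothing after it, including all app containers, starts. A toy model of that rule (purely illustrative, not the kubelet's actual code; `max_restarts` is an artificial cap so the simulation terminates):

```python
def simulate_init_containers(init_results, restart_policy="Always",
                             max_restarts=3):
    """Toy model: run init containers in order; retry a failing one
    (under RestartPolicy Always) up to max_restarts, and report whether
    the app containers would ever have been allowed to start."""
    restarts = 0
    completed = []
    for name, succeeds in init_results:
        while not succeeds:
            if restart_policy != "Always" or restarts >= max_restarts:
                # Still blocked on this init container: everything after
                # it, including all app containers, remains unstarted.
                return {"completed": completed, "restarts": restarts,
                        "app_started": False}
            restarts += 1  # kubelet restarts the failed init container
        completed.append(name)
    return {"completed": completed, "restarts": restarts,
            "app_started": True}
```

With `[("init1", False), ("init2", True)]` the simulation ends with `init1` restarted repeatedly, `init2` never run, and `app_started` false, matching the container statuses in the dump.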
• [SLOW TEST:52.517 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":275,"completed":37,"skipped":560,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:21:14.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:21:14.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6168" for this suite. 
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":275,"completed":38,"skipped":573,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:21:14.586: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:21:14.696: 
INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3462" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":275,"completed":39,"skipped":584,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:21:14.703: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service nodeport-service with the type=NodePort in namespace services-1879 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-1879 STEP: creating replication controller externalsvc in namespace services-1879 I0813 18:21:15.023112 7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-1879, replica count: 2 I0813 18:21:18.073601 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0813 18:21:21.073828 7 runners.go:190] 
externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0813 18:21:24.074172 7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Aug 13 18:21:24.206: INFO: Creating new exec pod Aug 13 18:21:30.242: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-1879 execpodsx2nm -- /bin/sh -x -c nslookup nodeport-service' Aug 13 18:21:30.472: INFO: stderr: "I0813 18:21:30.385860 763 log.go:172] (0xc00090cb00) (0xc0002a0140) Create stream\nI0813 18:21:30.386054 763 log.go:172] (0xc00090cb00) (0xc0002a0140) Stream added, broadcasting: 1\nI0813 18:21:30.393140 763 log.go:172] (0xc00090cb00) Reply frame received for 1\nI0813 18:21:30.393182 763 log.go:172] (0xc00090cb00) (0xc0006834a0) Create stream\nI0813 18:21:30.393204 763 log.go:172] (0xc00090cb00) (0xc0006834a0) Stream added, broadcasting: 3\nI0813 18:21:30.394542 763 log.go:172] (0xc00090cb00) Reply frame received for 3\nI0813 18:21:30.394576 763 log.go:172] (0xc00090cb00) (0xc000300000) Create stream\nI0813 18:21:30.394587 763 log.go:172] (0xc00090cb00) (0xc000300000) Stream added, broadcasting: 5\nI0813 18:21:30.395343 763 log.go:172] (0xc00090cb00) Reply frame received for 5\nI0813 18:21:30.456374 763 log.go:172] (0xc00090cb00) Data frame received for 5\nI0813 18:21:30.456406 763 log.go:172] (0xc000300000) (5) Data frame handling\nI0813 18:21:30.456450 763 log.go:172] (0xc000300000) (5) Data frame sent\n+ nslookup nodeport-service\nI0813 18:21:30.464110 763 log.go:172] (0xc00090cb00) Data frame received for 3\nI0813 18:21:30.464129 763 log.go:172] (0xc0006834a0) (3) Data frame handling\nI0813 18:21:30.464149 763 log.go:172] (0xc0006834a0) (3) Data frame sent\nI0813 18:21:30.464884 763 log.go:172] (0xc00090cb00) Data 
frame received for 3\nI0813 18:21:30.464906 763 log.go:172] (0xc0006834a0) (3) Data frame handling\nI0813 18:21:30.464922 763 log.go:172] (0xc0006834a0) (3) Data frame sent\nI0813 18:21:30.465337 763 log.go:172] (0xc00090cb00) Data frame received for 3\nI0813 18:21:30.465360 763 log.go:172] (0xc0006834a0) (3) Data frame handling\nI0813 18:21:30.465377 763 log.go:172] (0xc00090cb00) Data frame received for 5\nI0813 18:21:30.465384 763 log.go:172] (0xc000300000) (5) Data frame handling\nI0813 18:21:30.466716 763 log.go:172] (0xc00090cb00) Data frame received for 1\nI0813 18:21:30.466734 763 log.go:172] (0xc0002a0140) (1) Data frame handling\nI0813 18:21:30.466744 763 log.go:172] (0xc0002a0140) (1) Data frame sent\nI0813 18:21:30.466754 763 log.go:172] (0xc00090cb00) (0xc0002a0140) Stream removed, broadcasting: 1\nI0813 18:21:30.466798 763 log.go:172] (0xc00090cb00) Go away received\nI0813 18:21:30.466967 763 log.go:172] (0xc00090cb00) (0xc0002a0140) Stream removed, broadcasting: 1\nI0813 18:21:30.466979 763 log.go:172] (0xc00090cb00) (0xc0006834a0) Stream removed, broadcasting: 3\nI0813 18:21:30.466986 763 log.go:172] (0xc00090cb00) (0xc000300000) Stream removed, broadcasting: 5\n" Aug 13 18:21:30.472: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-1879.svc.cluster.local\tcanonical name = externalsvc.services-1879.svc.cluster.local.\nName:\texternalsvc.services-1879.svc.cluster.local\nAddress: 10.101.0.62\n\n" STEP: deleting ReplicationController externalsvc in namespace services-1879, will wait for the garbage collector to delete the pods Aug 13 18:21:30.531: INFO: Deleting ReplicationController externalsvc took: 6.516766ms Aug 13 18:21:30.932: INFO: Terminating ReplicationController externalsvc pods took: 400.264118ms Aug 13 18:21:43.946: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:21:44.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1879" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:29.922 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":275,"completed":40,"skipped":605,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:21:44.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-map-65aaa7ce-0d08-4ad4-9eac-da0eb1e1524e STEP: Creating a pod to test consume secrets Aug 13 18:21:45.461: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0" in namespace 
"projected-7769" to be "Succeeded or Failed" Aug 13 18:21:45.515: INFO: Pod "pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0": Phase="Pending", Reason="", readiness=false. Elapsed: 54.221077ms Aug 13 18:21:47.518: INFO: Pod "pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057268226s Aug 13 18:21:49.522: INFO: Pod "pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061082292s Aug 13 18:21:51.607: INFO: Pod "pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0": Phase="Running", Reason="", readiness=true. Elapsed: 6.146503863s Aug 13 18:21:53.611: INFO: Pod "pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.150236288s STEP: Saw pod success Aug 13 18:21:53.611: INFO: Pod "pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0" satisfied condition "Succeeded or Failed" Aug 13 18:21:53.614: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0 container projected-secret-volume-test: STEP: delete the pod Aug 13 18:21:53.662: INFO: Waiting for pod pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0 to disappear Aug 13 18:21:53.670: INFO: Pod pod-projected-secrets-8657fab8-fa7b-4d42-a30e-0b08f75671f0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:21:53.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7769" for this suite. 
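The projected-secret-with-mappings scenario above mounts a Secret through a `projected` volume and remaps a key to a custom path before the test container reads it back. A minimal sketch of the kind of Pod spec such a test drives — all names and the key/path mapping here are illustrative placeholders, not the generated identifiers from this run:

```yaml
# Illustrative Pod consuming a Secret via a projected volume with a
# key-to-path mapping; names are placeholders, not from this log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map
          items:
          - key: data-1            # original Secret key
            path: new-path-data-1  # remapped file name inside the mount
```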
• [SLOW TEST:9.054 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":41,"skipped":613,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:21:53.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2589, will wait for the garbage collector to delete the pods Aug 13 18:21:59.825: INFO: Deleting Job.batch foo took: 6.460845ms Aug 13 18:22:00.125: INFO: Terminating Job.batch foo pods took: 300.27386ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:22:43.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2589" for this suite. 
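The Job deletion test above creates a Job, waits until active pods equal the parallelism, deletes the Job object, and then waits for the garbage collector to reap the pods — which accounts for the long gap between the delete at 18:22:00 and the AfterEach at 18:22:43. A comparable Job manifest might look like this; the `parallelism` value and long-sleeping command are assumptions for illustration, not read from this log:

```yaml
# Illustrative Job comparable to the "foo" Job the test creates;
# parallelism and the sleep command are guesses, not from this log.
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    metadata:
      labels:
        job: foo
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox
        command: ["sleep", "1000000"]
```

Deleting such a Job (e.g. `kubectl delete job foo`) removes the Job object quickly — 6.46 ms in this run — while the dependent pods are cleaned up asynchronously by the garbage collector, which is exactly the "Ensuring job was deleted" wait the test performs.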
• [SLOW TEST:49.859 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":275,"completed":42,"skipped":631,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:22:43.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 13 18:22:43.642: INFO: Creating deployment "webserver-deployment" Aug 13 18:22:43.672: INFO: Waiting for observed generation 1 Aug 13 18:22:45.682: INFO: Waiting for all required pods to come up Aug 13 18:22:45.687: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 13 18:22:55.754: INFO: Waiting for deployment "webserver-deployment" to complete Aug 13 18:22:55.760: INFO: Updating deployment "webserver-deployment" with a non-existent image Aug 13 18:22:55.765: INFO: Updating deployment webserver-deployment Aug 13 18:22:55.765: INFO: Waiting for observed generation 2 Aug 13 18:22:58.512: INFO: Waiting for the first rollout's replicaset to have 
.status.availableReplicas = 8 Aug 13 18:23:02.034: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 13 18:23:02.100: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 13 18:23:03.029: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Aug 13 18:23:03.029: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Aug 13 18:23:03.073: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Aug 13 18:23:03.255: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Aug 13 18:23:03.255: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Aug 13 18:23:03.632: INFO: Updating deployment webserver-deployment Aug 13 18:23:03.632: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Aug 13 18:23:04.638: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Aug 13 18:23:05.304: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 13 18:23:09.615: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-5469 /apis/apps/v1/namespaces/deployment-5469/deployments/webserver-deployment 4b75b194-23e7-4151-910c-0b772016c70e 9273219 3 2020-08-13 18:22:43 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] [{e2e.test Update apps/v1 2020-08-13 18:23:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 
115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 
101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-13 18:23:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 
112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 110 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003b39cf8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-13 18:23:04 +0000 UTC,LastTransitionTime:2020-08-13 18:23:04 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-6676bcd6d4" is progressing.,LastUpdateTime:2020-08-13 18:23:06 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Aug 13 18:23:09.706: INFO: New 
ReplicaSet "webserver-deployment-6676bcd6d4" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-6676bcd6d4 deployment-5469 /apis/apps/v1/namespaces/deployment-5469/replicasets/webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 9273213 3 2020-08-13 18:22:55 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 4b75b194-23e7-4151-910c-0b772016c70e 0xc003be3c97 0xc003be3c98}] [] [{kube-controller-manager Update apps/v1 2020-08-13 18:23:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 55 53 98 49 57 52 45 50 51 101 55 45 52 49 53 49 45 57 49 48 99 45 48 98 55 55 50 48 49 54 99 55 48 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 
125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 
97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 6676bcd6d4,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003be3d18 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 13 18:23:09.706: INFO: All old ReplicaSets of Deployment "webserver-deployment": Aug 13 18:23:09.706: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-84855cf797 deployment-5469 /apis/apps/v1/namespaces/deployment-5469/replicasets/webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 9273200 3 2020-08-13 18:22:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 
deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 4b75b194-23e7-4151-910c-0b772016c70e 0xc003be3d77 0xc003be3d78}] [] [{kube-controller-manager Update apps/v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 98 55 53 98 49 57 52 45 50 51 101 55 45 52 49 53 49 45 57 49 48 99 45 48 98 55 55 50 48 49 54 99 55 48 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 
109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 
plicas":{},"f:replicas":{}}}],}}]},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 84855cf797,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003be3de8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Aug 13 18:23:10.790: INFO: Pod "webserver-deployment-6676bcd6d4-546tc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-546tc webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-546tc 273efa6e-b80d-4ea3-a2aa-cfc1f02d8e86 9273188 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb70f7 0xc003cb70f8}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5cce16fd-5841-4dde-a680-89a68dc16be5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 13 18:23:10.790: INFO: Pod "webserver-deployment-6676bcd6d4-6vsbc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-6vsbc webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-6vsbc 1e25d6bb-090d-497c-9c8d-44aa780ebf37 9273135 0 2020-08-13 18:22:55 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb7237 0xc003cb7238}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5cce16fd-5841-4dde-a680-89a68dc16be5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.45\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.45,StartTime:2020-08-13 18:22:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.45,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 13 18:23:10.791: INFO: Pod "webserver-deployment-6676bcd6d4-8nwcx" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-8nwcx webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-8nwcx 02a29e71-2e99-4e91-aefa-ab449c7f9f0d 9273230 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb7417 0xc003cb7418}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5cce16fd-5841-4dde-a680-89a68dc16be5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-13 18:23:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-13 18:23:05 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 13 18:23:10.791: INFO: Pod "webserver-deployment-6676bcd6d4-b4759" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-b4759 webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-b4759 36181f5b-314c-4455-b68c-999076cd7169 9273193 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb75c7 0xc003cb75c8}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5cce16fd-5841-4dde-a680-89a68dc16be5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Aug 13 18:23:10.791: INFO: Pod "webserver-deployment-6676bcd6d4-bw7pj" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-bw7pj webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-bw7pj cfa58ce1-c985-48aa-b28e-e44753b4ec62 9273217 0 2020-08-13 18:22:55 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb7707 0xc003cb7708}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5cce16fd-5841-4dde-a680-89a68dc16be5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-13 18:23:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.15\"}":{".":{}
44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:
[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:57 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:57 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:56 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.15,StartTime:2020-08-13 18:22:57 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access 
denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.792: INFO: Pod "webserver-deployment-6676bcd6d4-fc85h" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-fc85h webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-fc85h 18edafb8-3fc5-4d52-a258-65a2481cddad 9273183 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb78e7 0xc003cb78e8}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 99 99 101 49 54 102 100 45 53 56 52 49 45 52 100 100 101 45 97 54 56 48 45 56 57 97 54 56 100 99 49 54 98 101 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 
44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.792: INFO: Pod "webserver-deployment-6676bcd6d4-ffxsc" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-ffxsc webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-ffxsc e5f73be1-1e31-4831-b384-f78934759242 9273226 0 2020-08-13 18:22:58 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb7a27 0xc003cb7a28}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 
116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 99 99 101 49 54 102 100 45 53 56 52 49 45 52 100 100 101 45 97 54 56 48 45 56 57 97 54 56 100 99 49 54 98 101 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 
110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 18:23:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 52 54 92 34 125 34 58 123 34 46 34 58 123 125 44 
34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]
Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.46,StartTime:2020-08-13 18:22:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access 
denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.46,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.792: INFO: Pod "webserver-deployment-6676bcd6d4-lxrp6" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-lxrp6 webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-lxrp6 1126c5f7-8e8d-4b34-b8d8-45896408efa5 9273187 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb7c07 0xc003cb7c08}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 99 99 101 49 54 102 100 45 53 56 52 49 45 52 100 100 101 45 97 54 56 48 45 56 57 97 54 56 100 99 49 54 98 101 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 
44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.793: INFO: Pod "webserver-deployment-6676bcd6d4-p2qck" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-p2qck webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-p2qck 5e929cf1-fdeb-43c4-a7e3-7ba6084bd9e9 9273245 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb7d47 0xc003cb7d48}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 
45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 53 99 99 101 49 54 102 100 45 53 56 52 49 45 52 100 100 101 45 97 54 56 48 45 56 57 97 54 56 100 99 49 54 98 101 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 
111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 18:23:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}
],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-13 18:23:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.793: INFO: Pod "webserver-deployment-6676bcd6d4-rsbrw" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-rsbrw webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-rsbrw b46074e5-cf56-4db9-875b-c8df12640b3e 9273099 0 2020-08-13 18:22:58 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003cb7ef7 0xc003cb7ef8}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:58 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5cce16fd-5841-4dde-a680-89a68dc16be5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-13 18:23:00 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}
],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:59 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-13 18:22:59 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.793: INFO: Pod "webserver-deployment-6676bcd6d4-sr566" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-sr566 webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-sr566 9f403b30-cf28-48cd-804e-27d3e80f9fb2 9273120 0 2020-08-13 18:22:55 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003d300a7 0xc003d300a8}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:55 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5cce16fd-5841-4dde-a680-89a68dc16be5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-13 18:23:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.14\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}
],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:56 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:56 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.14,StartTime:2020-08-13 18:22:56 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: 
authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.794: INFO: Pod "webserver-deployment-6676bcd6d4-tv66w" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-tv66w webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-tv66w c2bb848d-e0a6-4374-8861-c9361b67c492 9273241 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003d30287 0xc003d30288}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5cce16fd-5841-4dde-a680-89a68dc16be5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-13 18:23:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}
],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,
Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-13 18:23:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.794: INFO: Pod "webserver-deployment-6676bcd6d4-v6dks" is not available: &Pod{ObjectMeta:{webserver-deployment-6676bcd6d4-v6dks webserver-deployment-6676bcd6d4- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-6676bcd6d4-v6dks 36e92368-f0fe-46f9-8b43-f47f0744f201 9273209 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:6676bcd6d4] map[] [{apps/v1 ReplicaSet webserver-deployment-6676bcd6d4 5cce16fd-5841-4dde-a680-89a68dc16be5 0xc003d30437 0xc003d30438}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5cce16fd-5841-4dde-a680-89a68dc16be5\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-13 18:23:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:
,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-13 18:23:05 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.794: INFO: Pod "webserver-deployment-84855cf797-5f6n8" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-5f6n8 webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-5f6n8 b2b5ba6d-89a6-4905-b016-ac8c74601033 9272985 0 2020-08-13 18:22:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d305e7 0xc003d305e8}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}} {kubelet Update v1 2020-08-13 18:22:52 +0000 UTC FieldsV1 &FieldsV1{Raw:*[{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.40\"}":{".":{},"f:ip":{}}},"f:startTime":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.40,StartTime:2020-08-13 18:22:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:22:51 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://1177a67b5f30dfb9f8fc0ff1f2599d8055b30e94fcf0022985272959a5f17548,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.40,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.794: INFO: Pod "webserver-deployment-84855cf797-8xtn4" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-8xtn4 webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-8xtn4 5f752b07-7cf0-42b1-993d-8b647236f0ed 9273184 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d30797 0xc003d30798}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.795: INFO: Pod "webserver-deployment-84855cf797-b6t57" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-b6t57 webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-b6t57 a7891ef4-ffd8-40ba-8a0d-48852ce18fdf 9273186 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d308c7 0xc003d308c8}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.795: INFO: Pod "webserver-deployment-84855cf797-fcs8q" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fcs8q webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-fcs8q 85ac021a-1665-4095-8608-60eca8373e33 9273185 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d309f7 0xc003d309f8}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup
:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.795: INFO: Pod "webserver-deployment-84855cf797-fjxns" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-fjxns webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-fjxns ec7b6eff-c78a-43c3-89d6-7c6f415dc4f6 9273000 0 2020-08-13 18:22:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d30b27 0xc003d30b28}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 
34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 102 52 100 100 100 101 55 45 57 49 49 53 45 52 50 97 49 45 56 101 48 49 45 53 100 99 48 99 52 99 52 57 100 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 
120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 18:22:54 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 
125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},
WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.10,StartTime:2020-08-13 18:22:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:22:53 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://a65c09757492d9b59d07966f8499bd66265643e97aa30cc517d735062ae89f83,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.10,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.795: INFO: Pod "webserver-deployment-84855cf797-h5x8g" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-h5x8g webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-h5x8g d7122b40-db8e-4cba-844f-9c8371a03171 9273228 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d30cd7 0xc003d30cd8}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 102 52 100 100 100 101 55 45 57 49 49 53 45 52 50 97 49 45 56 101 48 49 45 53 100 99 48 99 52 99 52 57 100 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 
101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 18:23:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 
58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-13 18:23:05 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.796: INFO: Pod "webserver-deployment-84855cf797-jc8lb" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jc8lb webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-jc8lb b865c611-77e1-475d-959c-0c193cd92212 9273027 0 2020-08-13 18:22:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d30e67 0xc003d30e68}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 102 52 100 100 100 101 55 45 57 49 49 53 45 52 50 97 49 45 56 101 48 49 45 53 100 99 48 99 52 99 52 57 100 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 18:22:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 
34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 49 46 52 52 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.44,StartTime:2020-08-13 18:22:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:22:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://c5007e9332eaf66c75e7b359a5602a9b25b58e406eb1a4f1ee976a3cf2ca64b3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.796: INFO: Pod "webserver-deployment-84855cf797-jc9wk" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jc9wk webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-jc9wk 0dd2b828-96e0-4163-873e-e9d07024b20f 9272968 0 2020-08-13 18:22:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d31017 0xc003d31018}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 102 52 100 100 100 101 55 45 57 49 49 53 45 52 50 97 49 45 56 101 48 49 45 53 100 99 48 99 52 99 52 57 100 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 
114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 18:22:50 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 
125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:49 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.9,StartTime:2020-08-13 18:22:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:22:49 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://f1d48ba1784243e3cb665ef8d22630cd369b91e1f3089a26c495b63f19267ca7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.796: INFO: Pod "webserver-deployment-84855cf797-jp6sj" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-jp6sj webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-jp6sj 38b3ee65-4c76-4f5f-8006-b429a28aed5b 9273248 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d311d7 0xc003d311d8}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 102 52 100 100 100 101 55 45 57 49 49 53 45 52 50 97 49 45 56 101 48 49 45 53 100 99 48 99 52 99 52 57 100 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 
… (remainder of the kube-controller-manager FieldsV1 byte dump elided — the decimal values are ASCII-encoded managedFields JSON covering f:metadata and f:spec) …],}} {kubelet Update v1 2020-08-13 18:23:09 +0000 UTC FieldsV1 &FieldsV1{Raw:*[… (kubelet FieldsV1 byte dump elided — ASCII-encoded managedFields JSON covering f:status) …
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-13 18:23:07 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.796: INFO: Pod "webserver-deployment-84855cf797-m7f6s" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-m7f6s webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-m7f6s 07e5b631-8b11-4029-bbb2-b49ee79e0b8e 9273220 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d31367 0xc003d31368}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 102 52 100 100 100 101 55 45 57 49 49 53 45 52 50 97 49 45 56 101 48 49 45 53 100 99 48 99 52 99 52 57 100 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 
… (remainder of the kube-controller-manager FieldsV1 byte dump elided — the decimal values are ASCII-encoded managedFields JSON covering f:metadata and f:spec) …],}} {kubelet Update v1 2020-08-13 18:23:06 +0000 UTC FieldsV1 &FieldsV1{Raw:*[… (kubelet FieldsV1 byte dump elided — ASCII-encoded managedFields JSON covering f:status) …
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-13 18:23:05 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.797: INFO: Pod "webserver-deployment-84855cf797-nbck2" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-nbck2 webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-nbck2 4f0af3b4-836c-4d42-83ee-8501ba699f03 9273022 0 2020-08-13 18:22:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d314f7 0xc003d314f8}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:43 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 102 52 100 100 100 101 55 45 57 49 49 53 45 52 50 97 49 45 56 101 48 49 45 53 100 99 48 99 52 99 52 57 100 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 
… (remainder of the kube-controller-manager FieldsV1 byte dump elided — the decimal values are ASCII-encoded managedFields JSON covering f:metadata and f:spec) …],}} {kubelet Update v1 2020-08-13 18:22:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*[… (kubelet FieldsV1 byte dump elided — ASCII-encoded managedFields JSON covering f:status) …
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.11,StartTime:2020-08-13 18:22:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:22:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ad49fc3b680e4131c8cbd334081f681649370ac9bf4a3cd106765c2b9353afdb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.11,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.797: INFO: Pod "webserver-deployment-84855cf797-q4hd2" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-q4hd2 webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-q4hd2 f70049ad-d204-4be7-91b3-51f4ea9b7761 9273194 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d316a7 0xc003d316a8}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:04 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-13 18:23:04 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.797: INFO: Pod "webserver-deployment-84855cf797-r6bgd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-r6bgd webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-r6bgd 09bb3872-103b-4f89-b73c-16a6f929e280 9273238 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d31837 0xc003d31838}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-13 18:23:08 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-13 18:23:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.797: INFO: Pod "webserver-deployment-84855cf797-rzt8t" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-rzt8t webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-rzt8t a9800743-d39a-47f9-b4d2-4313c424542d 9273025 0 2020-08-13 18:22:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d319c7 0xc003d319c8}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:43 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-13 18:22:55 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.2.13\"}":{".":{},"f:ip":{}}},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.13,StartTime:2020-08-13 18:22:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:22:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://084294d7706d2af905821328bd214ae8863240403387af306bcb18804415557a,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.798: INFO: Pod "webserver-deployment-84855cf797-sfdvm" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-sfdvm webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-sfdvm fb263e40-75e2-4b4d-99b6-d0ebea28aed0 9273231 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d31b77 0xc003d31b78}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:*{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-13 18:23:07 +0000 UTC FieldsV1 &FieldsV1{Raw:*{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-13 18:23:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.798: INFO: Pod "webserver-deployment-84855cf797-srxzb" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-srxzb webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-srxzb 53dd4e8b-a3a8-42c6-b0fa-792460616e67 9273037 0 2020-08-13 18:22:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d31d07 0xc003d31d08}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:43 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}
{kubelet Update v1 2020-08-13 18:22:55 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.42\"}":{".":{},"f:ip":{}}},"f:startTime":{}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.42,StartTime:2020-08-13 18:22:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:22:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ce8cfe244deddf9eaff3851b494cb3ace8ff0b1119e85e26d7f828ca94975882,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.42,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.798: INFO: Pod "webserver-deployment-84855cf797-stlzz" is available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-stlzz webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-stlzz 143b0cdc-61df-4d01-a19f-175c46c423d0 9273038 0 2020-08-13 18:22:43 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d31eb7 0xc003d31eb8}] [] [{kube-controller-manager Update v1 2020-08-13 18:22:43 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}
{kubelet Update v1 2020-08-13 18:22:55 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:phase":{},"f:podIP":{},"f:podIPs":{".":{},"k:{\"ip\":\"10.244.1.43\"}":{".":{},"f:ip":{}}},"f:startTime":{}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:22:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:10.244.1.43,StartTime:2020-08-13 18:22:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:22:54 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://cfa2acb0c4e6c852afc299801c544529aa15e871628c64d234a796d9d15f5ba6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.43,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.798: INFO: Pod "webserver-deployment-84855cf797-xs87v" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-xs87v webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-xs87v 5a2f1255-aa1a-4808-9384-fca5008b111d 9273240 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d50067 0xc003d50068}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}
{kubelet Update v1 2020-08-13 18:23:08 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:06 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-13 18:23:06 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.799: INFO: Pod "webserver-deployment-84855cf797-zss58" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zss58 webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-zss58 968bf004-4607-4af4-bd31-e5e290c6a329 9273243 0 2020-08-13 18:23:04 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d501f7 0xc003d501f8}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:04 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ef4ddde7-9115-42a1-8e01-5dc0c4c49d95\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}}
{kubelet Update v1 2020-08-13 18:23:09 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}},}}]},
Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObj
ectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.15,PodIP:,StartTime:2020-08-13 18:23:07 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Aug 13 18:23:10.799: INFO: Pod "webserver-deployment-84855cf797-zwptd" is not available: &Pod{ObjectMeta:{webserver-deployment-84855cf797-zwptd webserver-deployment-84855cf797- deployment-5469 /api/v1/namespaces/deployment-5469/pods/webserver-deployment-84855cf797-zwptd 1e5e2d9a-6ed4-4ee9-9622-4f1bf40629c1 9273189 0 2020-08-13 18:23:05 +0000 UTC map[name:httpd pod-template-hash:84855cf797] map[] [{apps/v1 ReplicaSet webserver-deployment-84855cf797 ef4ddde7-9115-42a1-8e01-5dc0c4c49d95 0xc003d50387 0xc003d50388}] [] [{kube-controller-manager Update v1 2020-08-13 18:23:05 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 101 102 52 100 100 100 101 55 45 57 49 49 53 45 52 50 97 49 45 56 101 48 49 45 53 100 99 48 99 52 99 52 57 100 57 53 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 
58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-5bx24,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-5bx24,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-5bx24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObje
ctReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:23:05 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:23:10.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5469" for this suite. 
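The `FieldsV1{Raw:*[...]}` sections in the pod dumps above are the managed-fields JSON printed as decimal byte values by Go's struct formatter, which makes them unreadable in the log. A small helper (hypothetical, not part of the e2e framework) can decode such a dump back into its JSON text:

```python
# Decode a FieldsV1 Raw dump (space-separated decimal byte values, as
# printed in the e2e log above) back into the JSON string it encodes.
def decode_fieldsv1(raw: str) -> str:
    return bytes(int(b) for b in raw.split()).decode("utf-8")

# Example: the opening bytes of a managedFields entry.
sample = "123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 125 125"
print(decode_fieldsv1(sample))  # {"f:metadata":{}}
```

Applied to the dumps above, this recovers field-ownership maps such as `{"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{...}}}}`.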
• [SLOW TEST:28.665 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":275,"completed":43,"skipped":641,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:23:12.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted Aug 13 18:23:31.342: INFO: 5 pods remaining Aug 13 18:23:31.342: INFO: 5 pods has nil DeletionTimestamp Aug 13 18:23:31.342: INFO: STEP: Gathering metrics W0813 18:23:36.513446 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Aug 13 18:23:36.513: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:23:36.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1158" for this suite. 
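The behaviour this garbage-collector test asserts — a dependent that still has one valid, live owner survives even after its other owner is deleted — can be sketched as a toy model. This is plain Python illustrating the ownerReferences rule, not the real controller logic:

```python
# Toy model of the rule the test exercises: a dependent pod becomes
# eligible for garbage collection only once ALL of its owners are gone.
def collectable(owner_uids, live_owner_uids):
    """True if every owner referenced by the dependent has been deleted."""
    return not any(uid in live_owner_uids for uid in owner_uids)

# Pods given both rcs as owners survive deletion of simpletest-rc-to-be-deleted
# because simpletest-rc-to-stay is still live (uids here are illustrative).
live = {"rc-to-stay-uid"}
print(collectable(["rc-to-be-deleted-uid", "rc-to-stay-uid"], live))  # False
# A pod owned only by the deleted rc is collected.
print(collectable(["rc-to-be-deleted-uid"], live))  # True
```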
• [SLOW TEST:25.324 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":275,"completed":44,"skipped":660,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:23:37.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service multi-endpoint-test in namespace services-4856 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4856 to expose endpoints map[] Aug 13 18:23:39.370: INFO: successfully validated that service multi-endpoint-test in namespace services-4856 exposes endpoints map[] (311.758904ms elapsed) STEP: Creating pod pod1 in namespace services-4856 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4856 to 
expose endpoints map[pod1:[100]] Aug 13 18:23:45.639: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (6.116668689s elapsed, will retry) Aug 13 18:23:47.135: INFO: successfully validated that service multi-endpoint-test in namespace services-4856 exposes endpoints map[pod1:[100]] (7.612522456s elapsed) STEP: Creating pod pod2 in namespace services-4856 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4856 to expose endpoints map[pod1:[100] pod2:[101]] Aug 13 18:23:56.952: INFO: Unexpected endpoints: found map[ae05bb6d-c08a-4796-acef-618ce560f129:[100]], expected map[pod1:[100] pod2:[101]] (9.589243404s elapsed, will retry) Aug 13 18:23:58.585: INFO: successfully validated that service multi-endpoint-test in namespace services-4856 exposes endpoints map[pod1:[100] pod2:[101]] (11.222466084s elapsed) STEP: Deleting pod pod1 in namespace services-4856 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4856 to expose endpoints map[pod2:[101]] Aug 13 18:23:59.045: INFO: successfully validated that service multi-endpoint-test in namespace services-4856 exposes endpoints map[pod2:[101]] (197.994202ms elapsed) STEP: Deleting pod pod2 in namespace services-4856 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4856 to expose endpoints map[] Aug 13 18:23:59.584: INFO: successfully validated that service multi-endpoint-test in namespace services-4856 exposes endpoints map[] (65.578481ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:24:00.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4856" for this suite. 
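The endpoint validation above repeatedly compares the observed endpoints map against the expected one (e.g. `map[pod1:[100] pod2:[101]]`), retrying on a mismatch until the 3m0s timeout expires. A minimal sketch of that retry loop, with a stubbed `get_endpoints` standing in for the API call the framework actually makes:

```python
import time

def wait_for_endpoints(get_endpoints, expected, timeout=180.0, interval=1.0):
    """Poll until the service exposes exactly the expected endpoints map."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        found = get_endpoints()
        if found == expected:
            return True  # "successfully validated ..."
        time.sleep(interval)  # "Unexpected endpoints ... will retry"
    return False

# Stub: endpoints appear on the second poll, as in the log above.
state = iter([{}, {"pod1": [100]}])
ok = wait_for_endpoints(lambda: next(state), {"pod1": [100]}, timeout=5, interval=0)
print(ok)  # True
```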
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:23.287 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":275,"completed":45,"skipped":668,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:24:00.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1238 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1238 I0813 18:24:02.490073 7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-1238, replica count: 2 I0813 18:24:05.540518 7 runners.go:190] 
externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0813 18:24:08.540849 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0813 18:24:11.541074 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0813 18:24:14.541292 7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 13 18:24:14.541: INFO: Creating new exec pod Aug 13 18:24:22.451: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-1238 execpodq82ct -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' Aug 13 18:24:22.659: INFO: stderr: "I0813 18:24:22.584272 785 log.go:172] (0xc0003c9d90) (0xc0006a7540) Create stream\nI0813 18:24:22.584325 785 log.go:172] (0xc0003c9d90) (0xc0006a7540) Stream added, broadcasting: 1\nI0813 18:24:22.587048 785 log.go:172] (0xc0003c9d90) Reply frame received for 1\nI0813 18:24:22.587106 785 log.go:172] (0xc0003c9d90) (0xc00079a000) Create stream\nI0813 18:24:22.587138 785 log.go:172] (0xc0003c9d90) (0xc00079a000) Stream added, broadcasting: 3\nI0813 18:24:22.588359 785 log.go:172] (0xc0003c9d90) Reply frame received for 3\nI0813 18:24:22.588396 785 log.go:172] (0xc0003c9d90) (0xc00079a140) Create stream\nI0813 18:24:22.588409 785 log.go:172] (0xc0003c9d90) (0xc00079a140) Stream added, broadcasting: 5\nI0813 18:24:22.589635 785 log.go:172] (0xc0003c9d90) Reply frame received for 5\nI0813 18:24:22.649524 785 log.go:172] (0xc0003c9d90) Data frame received for 5\nI0813 18:24:22.649546 785 log.go:172] (0xc00079a140) (5) Data frame handling\nI0813 18:24:22.649561 785 log.go:172] (0xc00079a140) (5) Data 
frame sent\nI0813 18:24:22.649569 785 log.go:172] (0xc0003c9d90) Data frame received for 5\nI0813 18:24:22.649576 785 log.go:172] (0xc00079a140) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0813 18:24:22.649595 785 log.go:172] (0xc00079a140) (5) Data frame sent\nI0813 18:24:22.649785 785 log.go:172] (0xc0003c9d90) Data frame received for 5\nI0813 18:24:22.649822 785 log.go:172] (0xc00079a140) (5) Data frame handling\nI0813 18:24:22.649924 785 log.go:172] (0xc0003c9d90) Data frame received for 3\nI0813 18:24:22.649953 785 log.go:172] (0xc00079a000) (3) Data frame handling\nI0813 18:24:22.651443 785 log.go:172] (0xc0003c9d90) Data frame received for 1\nI0813 18:24:22.651475 785 log.go:172] (0xc0006a7540) (1) Data frame handling\nI0813 18:24:22.651496 785 log.go:172] (0xc0006a7540) (1) Data frame sent\nI0813 18:24:22.651637 785 log.go:172] (0xc0003c9d90) (0xc0006a7540) Stream removed, broadcasting: 1\nI0813 18:24:22.651738 785 log.go:172] (0xc0003c9d90) Go away received\nI0813 18:24:22.651938 785 log.go:172] (0xc0003c9d90) (0xc0006a7540) Stream removed, broadcasting: 1\nI0813 18:24:22.651961 785 log.go:172] (0xc0003c9d90) (0xc00079a000) Stream removed, broadcasting: 3\nI0813 18:24:22.651974 785 log.go:172] (0xc0003c9d90) (0xc00079a140) Stream removed, broadcasting: 5\n" Aug 13 18:24:22.659: INFO: stdout: "" Aug 13 18:24:22.660: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-1238 execpodq82ct -- /bin/sh -x -c nc -zv -t -w 2 10.97.132.156 80' Aug 13 18:24:22.865: INFO: stderr: "I0813 18:24:22.792637 807 log.go:172] (0xc00003a0b0) (0xc0003161e0) Create stream\nI0813 18:24:22.792692 807 log.go:172] (0xc00003a0b0) (0xc0003161e0) Stream added, broadcasting: 1\nI0813 18:24:22.794141 807 log.go:172] (0xc00003a0b0) Reply frame received for 1\nI0813 18:24:22.794170 807 log.go:172] (0xc00003a0b0) 
(0xc0003165a0) Create stream\nI0813 18:24:22.794177 807 log.go:172] (0xc00003a0b0) (0xc0003165a0) Stream added, broadcasting: 3\nI0813 18:24:22.795083 807 log.go:172] (0xc00003a0b0) Reply frame received for 3\nI0813 18:24:22.795109 807 log.go:172] (0xc00003a0b0) (0xc000316820) Create stream\nI0813 18:24:22.795118 807 log.go:172] (0xc00003a0b0) (0xc000316820) Stream added, broadcasting: 5\nI0813 18:24:22.795989 807 log.go:172] (0xc00003a0b0) Reply frame received for 5\nI0813 18:24:22.858127 807 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0813 18:24:22.858188 807 log.go:172] (0xc000316820) (5) Data frame handling\nI0813 18:24:22.858211 807 log.go:172] (0xc000316820) (5) Data frame sent\nI0813 18:24:22.858223 807 log.go:172] (0xc00003a0b0) Data frame received for 5\nI0813 18:24:22.858233 807 log.go:172] (0xc000316820) (5) Data frame handling\n+ nc -zv -t -w 2 10.97.132.156 80\nConnection to 10.97.132.156 80 port [tcp/http] succeeded!\nI0813 18:24:22.858367 807 log.go:172] (0xc00003a0b0) Data frame received for 3\nI0813 18:24:22.858393 807 log.go:172] (0xc0003165a0) (3) Data frame handling\nI0813 18:24:22.859519 807 log.go:172] (0xc00003a0b0) Data frame received for 1\nI0813 18:24:22.859532 807 log.go:172] (0xc0003161e0) (1) Data frame handling\nI0813 18:24:22.859549 807 log.go:172] (0xc0003161e0) (1) Data frame sent\nI0813 18:24:22.859636 807 log.go:172] (0xc00003a0b0) (0xc0003161e0) Stream removed, broadcasting: 1\nI0813 18:24:22.859738 807 log.go:172] (0xc00003a0b0) Go away received\nI0813 18:24:22.859948 807 log.go:172] (0xc00003a0b0) (0xc0003161e0) Stream removed, broadcasting: 1\nI0813 18:24:22.859962 807 log.go:172] (0xc00003a0b0) (0xc0003165a0) Stream removed, broadcasting: 3\nI0813 18:24:22.859970 807 log.go:172] (0xc00003a0b0) (0xc000316820) Stream removed, broadcasting: 5\n" Aug 13 18:24:22.865: INFO: stdout: "" Aug 13 18:24:22.865: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:24:22.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1238" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:22.181 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":275,"completed":46,"skipped":707,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:24:22.998: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-projected-all-test-volume-4d85667b-417a-4f89-aea6-4116a71cb65e STEP: Creating secret with name secret-projected-all-test-volume-f35e0a98-0cba-49dd-8c3b-49e6d223fa8d STEP: Creating a pod to test 
Check all projections for projected volume plugin Aug 13 18:24:23.210: INFO: Waiting up to 5m0s for pod "projected-volume-94f221bb-d5ac-4034-b0f5-3eae90990067" in namespace "projected-1332" to be "Succeeded or Failed" Aug 13 18:24:23.245: INFO: Pod "projected-volume-94f221bb-d5ac-4034-b0f5-3eae90990067": Phase="Pending", Reason="", readiness=false. Elapsed: 35.560401ms Aug 13 18:24:25.249: INFO: Pod "projected-volume-94f221bb-d5ac-4034-b0f5-3eae90990067": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03971157s Aug 13 18:24:27.274: INFO: Pod "projected-volume-94f221bb-d5ac-4034-b0f5-3eae90990067": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063962865s Aug 13 18:24:29.286: INFO: Pod "projected-volume-94f221bb-d5ac-4034-b0f5-3eae90990067": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075834319s STEP: Saw pod success Aug 13 18:24:29.286: INFO: Pod "projected-volume-94f221bb-d5ac-4034-b0f5-3eae90990067" satisfied condition "Succeeded or Failed" Aug 13 18:24:29.288: INFO: Trying to get logs from node kali-worker pod projected-volume-94f221bb-d5ac-4034-b0f5-3eae90990067 container projected-all-volume-test: STEP: delete the pod Aug 13 18:24:29.746: INFO: Waiting for pod projected-volume-94f221bb-d5ac-4034-b0f5-3eae90990067 to disappear Aug 13 18:24:29.799: INFO: Pod projected-volume-94f221bb-d5ac-4034-b0f5-3eae90990067 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:24:29.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1332" for this suite. 
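The `nc -zv -t -w 2 <host> <port>` probes in the ExternalName-to-ClusterIP test earlier amount to a TCP connect attempt with a 2-second timeout. The same check in stdlib Python (a sketch of the idea, not the framework's code):

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
    """Equivalent of `nc -zv -t -w 2 host port`: succeed if a TCP
    connection to host:port can be established within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# A port nothing is listening on is refused almost immediately
# (assumes no local service on port 1).
print(tcp_check("127.0.0.1", 1))  # False
```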
• [SLOW TEST:6.876 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":275,"completed":47,"skipped":719,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:24:29.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 13 18:24:30.054: INFO: Waiting up to 5m0s for pod "pod-dc071351-5bab-4d0b-99eb-a117cb0d3617" in namespace "emptydir-2453" to be "Succeeded or Failed" Aug 13 18:24:30.173: INFO: Pod "pod-dc071351-5bab-4d0b-99eb-a117cb0d3617": Phase="Pending", Reason="", readiness=false. Elapsed: 118.778466ms Aug 13 18:24:32.237: INFO: Pod "pod-dc071351-5bab-4d0b-99eb-a117cb0d3617": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.183495273s Aug 13 18:24:34.242: INFO: Pod "pod-dc071351-5bab-4d0b-99eb-a117cb0d3617": Phase="Pending", Reason="", readiness=false. Elapsed: 4.187586369s Aug 13 18:24:36.245: INFO: Pod "pod-dc071351-5bab-4d0b-99eb-a117cb0d3617": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190738611s STEP: Saw pod success Aug 13 18:24:36.245: INFO: Pod "pod-dc071351-5bab-4d0b-99eb-a117cb0d3617" satisfied condition "Succeeded or Failed" Aug 13 18:24:36.247: INFO: Trying to get logs from node kali-worker pod pod-dc071351-5bab-4d0b-99eb-a117cb0d3617 container test-container: STEP: delete the pod Aug 13 18:24:36.556: INFO: Waiting for pod pod-dc071351-5bab-4d0b-99eb-a117cb0d3617 to disappear Aug 13 18:24:36.716: INFO: Pod pod-dc071351-5bab-4d0b-99eb-a117cb0d3617 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:24:36.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2453" for this suite. 
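The repeated `Waiting up to 5m0s for pod ... to be "Succeeded or Failed"` lines, with their `Phase="Pending" ... Elapsed: ...` progress entries, follow the framework's standard poll-until-phase pattern. A generic sketch of that loop, with a stubbed phase getter in place of the real API-server query:

```python
import time

def wait_for_phase(get_phase, wanted=("Succeeded", "Failed"),
                   timeout=300.0, interval=2.0):
    """Poll a pod's phase until it reaches a terminal state or times out."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in wanted:
            return phase  # "satisfied condition 'Succeeded or Failed'"
        time.sleep(interval)  # log "Phase=Pending ... Elapsed: ..." here
    raise TimeoutError("pod never reached one of %s" % (wanted,))

# Stub: Pending twice, then Succeeded, mirroring the log above.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_phase(lambda: next(phases), interval=0))  # Succeeded
```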
• [SLOW TEST:6.893 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":48,"skipped":730,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:24:36.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 13 18:24:49.085: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 13 18:24:49.204: INFO: Pod pod-with-poststart-exec-hook still exists Aug 13 18:24:51.204: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 13 18:24:51.389: INFO: Pod pod-with-poststart-exec-hook still exists Aug 13 18:24:53.204: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 13 18:24:53.610: INFO: Pod pod-with-poststart-exec-hook still exists Aug 13 18:24:55.204: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 13 18:24:55.227: INFO: Pod pod-with-poststart-exec-hook still exists Aug 13 18:24:57.204: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 13 18:24:57.316: INFO: Pod pod-with-poststart-exec-hook still exists Aug 13 18:24:59.204: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 13 18:24:59.209: INFO: Pod pod-with-poststart-exec-hook still exists Aug 13 18:25:01.204: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 13 18:25:01.208: INFO: Pod pod-with-poststart-exec-hook still exists Aug 13 18:25:03.204: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 13 18:25:03.220: INFO: Pod pod-with-poststart-exec-hook still exists Aug 13 18:25:05.204: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 13 18:25:05.209: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:25:05.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7626" 
for this suite. • [SLOW TEST:28.452 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":275,"completed":49,"skipped":740,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:25:05.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 13 18:25:05.311: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Aug 13 18:25:08.280: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9693 create -f -' Aug 13 18:25:13.706: INFO: stderr: "" Aug 13 18:25:13.706: INFO: stdout: 
"e2e-test-crd-publish-openapi-9332-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 13 18:25:13.707: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9693 delete e2e-test-crd-publish-openapi-9332-crds test-cr' Aug 13 18:25:13.812: INFO: stderr: "" Aug 13 18:25:13.812: INFO: stdout: "e2e-test-crd-publish-openapi-9332-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Aug 13 18:25:13.812: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9693 apply -f -' Aug 13 18:25:14.105: INFO: stderr: "" Aug 13 18:25:14.105: INFO: stdout: "e2e-test-crd-publish-openapi-9332-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Aug 13 18:25:14.106: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9693 delete e2e-test-crd-publish-openapi-9332-crds test-cr' Aug 13 18:25:14.203: INFO: stderr: "" Aug 13 18:25:14.203: INFO: stdout: "e2e-test-crd-publish-openapi-9332-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Aug 13 18:25:14.203: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9332-crds' Aug 13 18:25:14.482: INFO: stderr: "" Aug 13 18:25:14.482: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-9332-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:25:16.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"crd-publish-openapi-9693" for this suite. • [SLOW TEST:11.204 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":275,"completed":50,"skipped":747,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:25:16.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-fb53d391-cac7-4930-b3b8-8c342d39366c STEP: Creating a pod to test consume secrets Aug 13 18:25:16.527: INFO: Waiting up to 5m0s for pod "pod-secrets-d0c5986f-a9c3-44d8-bf45-7ce11ac7244c" in namespace "secrets-6336" to be "Succeeded or Failed" Aug 13 18:25:16.548: INFO: Pod "pod-secrets-d0c5986f-a9c3-44d8-bf45-7ce11ac7244c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.362525ms Aug 13 18:25:18.585: INFO: Pod "pod-secrets-d0c5986f-a9c3-44d8-bf45-7ce11ac7244c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.058128031s Aug 13 18:25:20.593: INFO: Pod "pod-secrets-d0c5986f-a9c3-44d8-bf45-7ce11ac7244c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066172769s Aug 13 18:25:22.941: INFO: Pod "pod-secrets-d0c5986f-a9c3-44d8-bf45-7ce11ac7244c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.414077956s STEP: Saw pod success Aug 13 18:25:22.941: INFO: Pod "pod-secrets-d0c5986f-a9c3-44d8-bf45-7ce11ac7244c" satisfied condition "Succeeded or Failed" Aug 13 18:25:22.944: INFO: Trying to get logs from node kali-worker pod pod-secrets-d0c5986f-a9c3-44d8-bf45-7ce11ac7244c container secret-volume-test: STEP: delete the pod Aug 13 18:25:23.420: INFO: Waiting for pod pod-secrets-d0c5986f-a9c3-44d8-bf45-7ce11ac7244c to disappear Aug 13 18:25:23.491: INFO: Pod pod-secrets-d0c5986f-a9c3-44d8-bf45-7ce11ac7244c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:25:23.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6336" for this suite. 
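The elapsed values in these messages are Go `time.Duration` strings ("20.362525ms", "6.414077956s", "5m0s"). A small parser converting them to seconds, covering the unit forms that appear in this log (a sketch; it rejects anything it cannot fully tokenize):

```python
import re

# Go duration unit suffixes and their value in seconds.
_UNITS = {"ns": 1e-9, "us": 1e-6, "µs": 1e-6, "ms": 1e-3,
          "s": 1.0, "m": 60.0, "h": 3600.0}
_TOKEN = re.compile(r"(\d+(?:\.\d+)?)(ns|us|µs|ms|s|m|h)")

def parse_go_duration(text):
    """Convert a Go time.Duration string (e.g. '6.414077956s', '5m0s') to seconds."""
    pos, total = 0, 0.0
    for match in _TOKEN.finditer(text):
        if match.start() != pos:          # gap means an unparseable character
            raise ValueError(f"bad duration: {text!r}")
        total += float(match.group(1)) * _UNITS[match.group(2)]
        pos = match.end()
    if pos != len(text) or pos == 0:      # trailing junk or empty input
        raise ValueError(f"bad duration: {text!r}")
    return total
```

Compound forms chain naturally: "5m0s" parses as two tokens (5 minutes plus 0 seconds), which is how the framework prints its wait budgets.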
• [SLOW TEST:7.092 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":51,"skipped":755,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:25:23.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 13 18:25:23.830: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5208 /api/v1/namespaces/watch-5208/configmaps/e2e-watch-test-label-changed 2da2b9d9-cad3-4ebe-9130-ada6c56c18ca 9274535 0 2020-08-13 18:25:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-13 18:25:23 +0000 
UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:25:23.830: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5208 /api/v1/namespaces/watch-5208/configmaps/e2e-watch-test-label-changed 2da2b9d9-cad3-4ebe-9130-ada6c56c18ca 9274537 0 2020-08-13 18:25:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-13 18:25:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:25:23.830: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5208 /api/v1/namespaces/watch-5208/configmaps/e2e-watch-test-label-changed 2da2b9d9-cad3-4ebe-9130-ada6c56c18ca 9274538 0 2020-08-13 18:25:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-13 18:25:23 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: 
modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 13 18:25:33.888: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5208 /api/v1/namespaces/watch-5208/configmaps/e2e-watch-test-label-changed 2da2b9d9-cad3-4ebe-9130-ada6c56c18ca 9274610 0 2020-08-13 18:25:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-13 18:25:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:25:33.888: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5208 /api/v1/namespaces/watch-5208/configmaps/e2e-watch-test-label-changed 2da2b9d9-cad3-4ebe-9130-ada6c56c18ca 9274611 0 2020-08-13 18:25:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-13 18:25:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 
3,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:25:33.888: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5208 /api/v1/namespaces/watch-5208/configmaps/e2e-watch-test-label-changed 2da2b9d9-cad3-4ebe-9130-ada6c56c18ca 9274612 0 2020-08-13 18:25:23 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] [{e2e.test Update v1 2020-08-13 18:25:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:25:33.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5208" for this suite. 
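The `FieldsV1{Raw:*[...]}` dumps in the watch events above are managedFields JSON printed as raw byte values. Decoding the ADDED event's byte array recovers the field-ownership document:

```python
# Byte values copied from the ADDED event's FieldsV1 Raw dump above.
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123,
       34, 102, 58, 108, 97, 98, 101, 108, 115, 34, 58, 123, 34, 46, 34, 58,
       123, 125, 44, 34, 102, 58, 119, 97, 116, 99, 104, 45, 116, 104, 105,
       115, 45, 99, 111, 110, 102, 105, 103, 109, 97, 112, 34, 58, 123, 125,
       125, 125, 125]

decoded = bytes(raw).decode("utf-8")
print(decoded)  # {"f:metadata":{"f:labels":{".":{},"f:watch-this-configmap":{}}}}
```

That is, the `e2e.test` manager owns the `watch-this-configmap` label on this ConfigMap, which is exactly the label the test mutates and restores.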
• [SLOW TEST:10.400 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":275,"completed":52,"skipped":802,"failed":0} SSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:25:33.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5770.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5770.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 13 18:25:40.155: INFO: DNS probes using 
dns-test-86a81925-b9d6-45b1-be0e-da3ae3917f23 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5770.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5770.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 13 18:25:48.313: INFO: File wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local from pod dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 13 18:25:48.320: INFO: File jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local from pod dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 13 18:25:48.320: INFO: Lookups using dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c failed for: [wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local] Aug 13 18:25:53.325: INFO: File wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local from pod dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 13 18:25:53.329: INFO: File jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local from pod dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 13 18:25:53.329: INFO: Lookups using dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c failed for: [wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local] Aug 13 18:25:58.325: INFO: File wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local from pod dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 13 18:25:58.328: INFO: File jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local from pod dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 13 18:25:58.328: INFO: Lookups using dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c failed for: [wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local] Aug 13 18:26:03.331: INFO: File wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local from pod dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 13 18:26:03.334: INFO: File jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local from pod dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c contains 'foo.example.com. ' instead of 'bar.example.com.' Aug 13 18:26:03.335: INFO: Lookups using dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c failed for: [wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local] Aug 13 18:26:08.328: INFO: File jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local from pod dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Aug 13 18:26:08.328: INFO: Lookups using dns-5770/dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c failed for: [jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local] Aug 13 18:26:13.329: INFO: DNS probes using dns-test-bfce10ee-9d17-4273-b6b6-1aab8b89948c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5770.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5770.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5770.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5770.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 13 18:26:22.528: INFO: DNS probes using dns-test-e68198b5-8b35-470d-9daa-d5c52356b118 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:26:22.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5770" for this suite. 
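The ExternalName phase above shows the probe loop tolerating stale answers: lookups keep returning 'foo.example.com.' for several rounds after the spec changes, and the test only fails if 'bar.example.com.' never shows up before the deadline. A sketch of that retry shape, with a canned answer sequence in place of real `dig` lookups (the hostnames are from the log; the helper itself is illustrative):

```python
def await_dns_answer(lookup, expected, attempts=10):
    """Retry lookup() until it returns `expected`, logging stale answers
    the way the e2e prober does, and return the attempt count."""
    for attempt in range(attempts):
        answer = lookup()
        if answer == expected:
            print(f"DNS probe succeeded after {attempt + 1} attempt(s)")
            return attempt + 1
        print(f"lookup contains {answer!r} instead of {expected!r}")
    raise AssertionError(f"never observed {expected!r} in {attempts} attempts")

# Stale answers persist for a few rounds after the ExternalName change,
# as in the log, then the updated CNAME propagates.
answers = iter(["foo.example.com."] * 3 + ["bar.example.com."])
tries = await_dns_answer(lambda: next(answers), "bar.example.com.")
```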
• [SLOW TEST:48.942 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":275,"completed":53,"skipped":807,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:26:22.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 13 18:26:23.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config version' Aug 13 18:26:23.944: INFO: stderr: "" Aug 13 18:26:23.944: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.5\", GitCommit:\"e6503f8d8f769ace2f338794c914a96fc335df0f\", GitTreeState:\"clean\", BuildDate:\"2020-08-13T14:50:34Z\", GoVersion:\"go1.13.15\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"18\", GitVersion:\"v1.18.4\", GitCommit:\"c96aede7b5205121079932896c4ad89bb93260af\", 
GitTreeState:\"clean\", BuildDate:\"2020-06-20T01:49:49Z\", GoVersion:\"go1.13.9\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:26:23.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7443" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":275,"completed":54,"skipped":835,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:26:23.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 13 18:26:24.194: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e6f943ae-88b4-4c26-8be2-813633093613", Controller:(*bool)(0xc0035a319a), BlockOwnerDeletion:(*bool)(0xc0035a319b)}} Aug 13 18:26:24.262: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"ab90c2b8-aa13-4896-88a1-0674b257de67", Controller:(*bool)(0xc003cb7eba), BlockOwnerDeletion:(*bool)(0xc003cb7ebb)}} Aug 13 18:26:24.292: INFO: 
pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"da073677-55da-498e-b649-9c5683512004", Controller:(*bool)(0xc002146352), BlockOwnerDeletion:(*bool)(0xc002146353)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:26:29.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6997" for this suite. • [SLOW TEST:5.665 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":275,"completed":55,"skipped":906,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:26:29.618: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:26:42.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8417" for this suite. • [SLOW TEST:12.980 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":275,"completed":56,"skipped":906,"failed":0}
SSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:26:42.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:26:42.719: INFO: Creating ReplicaSet my-hostname-basic-dd611244-7d3a-4cfc-9155-12a1e326afd6
Aug 13 18:26:42.728: INFO: Pod name my-hostname-basic-dd611244-7d3a-4cfc-9155-12a1e326afd6: Found 0 pods out of 1
Aug 13 18:26:47.733: INFO: Pod name my-hostname-basic-dd611244-7d3a-4cfc-9155-12a1e326afd6: Found 1 pods out of 1
Aug 13 18:26:47.733: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-dd611244-7d3a-4cfc-9155-12a1e326afd6" is running
Aug 13 18:26:47.737: INFO: Pod "my-hostname-basic-dd611244-7d3a-4cfc-9155-12a1e326afd6-lt6q4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-13 18:26:42 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-13 18:26:46 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-13 18:26:46 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-13 18:26:42 +0000 UTC Reason: Message:}])
Aug 13 18:26:47.737: INFO: Trying to dial the pod
Aug 13 18:26:52.749: INFO: Controller my-hostname-basic-dd611244-7d3a-4cfc-9155-12a1e326afd6: Got expected result from replica 1 [my-hostname-basic-dd611244-7d3a-4cfc-9155-12a1e326afd6-lt6q4]: "my-hostname-basic-dd611244-7d3a-4cfc-9155-12a1e326afd6-lt6q4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:26:52.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-3155" for this suite.
• [SLOW TEST:10.158 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":275,"completed":57,"skipped":909,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:26:52.756: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 18:26:53.541: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 18:26:55.550: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940013, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940013, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940013, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940013, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:26:57.554: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940013, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940013, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940013, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940013, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 18:27:00.584: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:27:01.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4053" for this suite.
STEP: Destroying namespace "webhook-4053-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.220 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":275,"completed":58,"skipped":909,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:27:01.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 13 18:27:07.466: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:27:07.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3561" for this suite.
• [SLOW TEST:5.771 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":275,"completed":59,"skipped":943,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:27:07.748: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 18:27:07.864: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bc7ff366-df7e-4a89-b46d-97c2c46596e1" in namespace "projected-9963" to be "Succeeded or Failed"
Aug 13 18:27:07.917: INFO: Pod "downwardapi-volume-bc7ff366-df7e-4a89-b46d-97c2c46596e1": Phase="Pending", Reason="", readiness=false. Elapsed: 52.909679ms
Aug 13 18:27:10.094: INFO: Pod "downwardapi-volume-bc7ff366-df7e-4a89-b46d-97c2c46596e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230320374s
Aug 13 18:27:12.098: INFO: Pod "downwardapi-volume-bc7ff366-df7e-4a89-b46d-97c2c46596e1": Phase="Running", Reason="", readiness=true. Elapsed: 4.234454295s
Aug 13 18:27:14.103: INFO: Pod "downwardapi-volume-bc7ff366-df7e-4a89-b46d-97c2c46596e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.238466713s
STEP: Saw pod success
Aug 13 18:27:14.103: INFO: Pod "downwardapi-volume-bc7ff366-df7e-4a89-b46d-97c2c46596e1" satisfied condition "Succeeded or Failed"
Aug 13 18:27:14.106: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-bc7ff366-df7e-4a89-b46d-97c2c46596e1 container client-container:
STEP: delete the pod
Aug 13 18:27:14.511: INFO: Waiting for pod downwardapi-volume-bc7ff366-df7e-4a89-b46d-97c2c46596e1 to disappear
Aug 13 18:27:14.683: INFO: Pod downwardapi-volume-bc7ff366-df7e-4a89-b46d-97c2c46596e1 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:27:14.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9963" for this suite.
• [SLOW TEST:6.946 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":275,"completed":60,"skipped":957,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:27:14.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 13 18:27:15.330: INFO: Waiting up to 5m0s for pod "pod-cc5a7b80-c556-474e-bb0e-e61792327061" in namespace "emptydir-5212" to be "Succeeded or Failed"
Aug 13 18:27:15.389: INFO: Pod "pod-cc5a7b80-c556-474e-bb0e-e61792327061": Phase="Pending", Reason="", readiness=false. Elapsed: 59.078493ms
Aug 13 18:27:17.483: INFO: Pod "pod-cc5a7b80-c556-474e-bb0e-e61792327061": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15375309s
Aug 13 18:27:19.488: INFO: Pod "pod-cc5a7b80-c556-474e-bb0e-e61792327061": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.158371081s
STEP: Saw pod success
Aug 13 18:27:19.488: INFO: Pod "pod-cc5a7b80-c556-474e-bb0e-e61792327061" satisfied condition "Succeeded or Failed"
Aug 13 18:27:19.491: INFO: Trying to get logs from node kali-worker pod pod-cc5a7b80-c556-474e-bb0e-e61792327061 container test-container:
STEP: delete the pod
Aug 13 18:27:19.935: INFO: Waiting for pod pod-cc5a7b80-c556-474e-bb0e-e61792327061 to disappear
Aug 13 18:27:19.964: INFO: Pod pod-cc5a7b80-c556-474e-bb0e-e61792327061 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:27:19.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5212" for this suite.
• [SLOW TEST:5.275 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":61,"skipped":1010,"failed":0}
SS
------------------------------
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:27:19.971: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0813 18:27:21.688022       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 13 18:27:21.688: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:27:21.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1193" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":275,"completed":62,"skipped":1012,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:27:21.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0813 18:27:32.424605       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 13 18:27:32.424: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:27:32.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6111" for this suite.
• [SLOW TEST:10.734 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete pods created by rc when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":275,"completed":63,"skipped":1043,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:27:32.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 13 18:27:32.531: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:27:41.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5714" for this suite.
• [SLOW TEST:8.897 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":275,"completed":64,"skipped":1083,"failed":0}
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:27:41.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:157
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:27:41.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2849" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":275,"completed":65,"skipped":1083,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:27:41.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 18:27:42.906: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 18:27:44.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940062, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940062, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940062, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940062, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:27:47.191: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940062, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940062, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940062, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940062, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 18:27:50.097: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:28:02.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-355" for this suite.
STEP: Destroying namespace "webhook-355-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:22.748 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should honor timeout [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":275,"completed":66,"skipped":1119,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:28:04.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:28:16.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6466" for this suite.
• [SLOW TEST:12.074 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":275,"completed":67,"skipped":1178,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:28:16.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-3519a8d0-f538-4a6f-a69c-8ff817de7ce5
STEP: Creating a pod to test consume secrets
Aug 13 18:28:16.611: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a6e2230d-0e2e-4d64-b62f-8a369e9d1415" in namespace "projected-4242" to be "Succeeded or Failed"
Aug 13 18:28:16.639: INFO: Pod "pod-projected-secrets-a6e2230d-0e2e-4d64-b62f-8a369e9d1415": Phase="Pending", Reason="", readiness=false. Elapsed: 28.154603ms
Aug 13 18:28:18.685: INFO: Pod "pod-projected-secrets-a6e2230d-0e2e-4d64-b62f-8a369e9d1415": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073725467s
Aug 13 18:28:20.922: INFO: Pod "pod-projected-secrets-a6e2230d-0e2e-4d64-b62f-8a369e9d1415": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.3114894s
STEP: Saw pod success
Aug 13 18:28:20.922: INFO: Pod "pod-projected-secrets-a6e2230d-0e2e-4d64-b62f-8a369e9d1415" satisfied condition "Succeeded or Failed"
Aug 13 18:28:21.050: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-a6e2230d-0e2e-4d64-b62f-8a369e9d1415 container projected-secret-volume-test:
STEP: delete the pod
Aug 13 18:28:21.086: INFO: Waiting for pod pod-projected-secrets-a6e2230d-0e2e-4d64-b62f-8a369e9d1415 to disappear
Aug 13 18:28:21.098: INFO: Pod pod-projected-secrets-a6e2230d-0e2e-4d64-b62f-8a369e9d1415 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:28:21.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4242" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":68,"skipped":1185,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:28:21.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 18:28:21.206: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f1a1d5c-65f7-4f14-bb45-ae38ac7923b1" in namespace "downward-api-148" to be "Succeeded or Failed"
Aug 13 18:28:21.228: INFO: Pod "downwardapi-volume-4f1a1d5c-65f7-4f14-bb45-ae38ac7923b1": Phase="Pending", Reason="", readiness=false. Elapsed: 22.017839ms
Aug 13 18:28:23.544: INFO: Pod "downwardapi-volume-4f1a1d5c-65f7-4f14-bb45-ae38ac7923b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337851167s
Aug 13 18:28:25.548: INFO: Pod "downwardapi-volume-4f1a1d5c-65f7-4f14-bb45-ae38ac7923b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.342207119s
STEP: Saw pod success
Aug 13 18:28:25.548: INFO: Pod "downwardapi-volume-4f1a1d5c-65f7-4f14-bb45-ae38ac7923b1" satisfied condition "Succeeded or Failed"
Aug 13 18:28:25.551: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-4f1a1d5c-65f7-4f14-bb45-ae38ac7923b1 container client-container:
STEP: delete the pod
Aug 13 18:28:25.574: INFO: Waiting for pod downwardapi-volume-4f1a1d5c-65f7-4f14-bb45-ae38ac7923b1 to disappear
Aug 13 18:28:25.578: INFO: Pod downwardapi-volume-4f1a1d5c-65f7-4f14-bb45-ae38ac7923b1 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:28:25.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-148" for this suite.
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":69,"skipped":1188,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:28:25.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:28:31.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4350" for this suite.
• [SLOW TEST:6.035 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should receive events on concurrent watches in same order [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":275,"completed":70,"skipped":1200,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:28:31.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-811d40ba-8564-45a3-981c-79a20f1c2998
STEP: Creating secret with name s-test-opt-upd-0a4d47ef-d3e5-4d6c-ae30-4c60139ae258
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-811d40ba-8564-45a3-981c-79a20f1c2998
STEP: Updating secret
s-test-opt-upd-0a4d47ef-d3e5-4d6c-ae30-4c60139ae258 STEP: Creating secret with name s-test-opt-create-5c93b9ad-e4c9-48f0-b78f-acfd932c3d6d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:30:08.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9314" for this suite. • [SLOW TEST:96.701 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":71,"skipped":1217,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:30:08.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:30:09.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5432" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":275,"completed":72,"skipped":1223,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:30:09.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward api env vars Aug 13 18:30:09.275: INFO: Waiting up to 5m0s for pod "downward-api-850696be-bc12-453d-a310-8ffa66298177" in namespace "downward-api-8656" to be "Succeeded or Failed" Aug 13 18:30:09.291: INFO: Pod "downward-api-850696be-bc12-453d-a310-8ffa66298177": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.984211ms Aug 13 18:30:11.322: INFO: Pod "downward-api-850696be-bc12-453d-a310-8ffa66298177": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04704557s Aug 13 18:30:13.329: INFO: Pod "downward-api-850696be-bc12-453d-a310-8ffa66298177": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054540458s Aug 13 18:30:15.466: INFO: Pod "downward-api-850696be-bc12-453d-a310-8ffa66298177": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.190825065s STEP: Saw pod success Aug 13 18:30:15.466: INFO: Pod "downward-api-850696be-bc12-453d-a310-8ffa66298177" satisfied condition "Succeeded or Failed" Aug 13 18:30:15.468: INFO: Trying to get logs from node kali-worker pod downward-api-850696be-bc12-453d-a310-8ffa66298177 container dapi-container: STEP: delete the pod Aug 13 18:30:15.564: INFO: Waiting for pod downward-api-850696be-bc12-453d-a310-8ffa66298177 to disappear Aug 13 18:30:15.831: INFO: Pod downward-api-850696be-bc12-453d-a310-8ffa66298177 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:30:15.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8656" for this suite. 
• [SLOW TEST:6.692 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":275,"completed":73,"skipped":1235,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:30:15.839: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating service nodeport-test with type=NodePort in namespace services-3259 STEP: creating replication controller nodeport-test in namespace services-3259 I0813 18:30:16.686248 7 runners.go:190] Created replication controller with name: nodeport-test, namespace: services-3259, replica count: 2 I0813 18:30:19.736978 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0813 18:30:22.737239 7 runners.go:190] nodeport-test Pods: 
2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0813 18:30:25.737486 7 runners.go:190] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 13 18:30:25.737: INFO: Creating new exec pod Aug 13 18:30:31.003: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3259 execpodrx6gm -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' Aug 13 18:30:31.281: INFO: stderr: "I0813 18:30:31.193911 958 log.go:172] (0xc0009d69a0) (0xc0009180a0) Create stream\nI0813 18:30:31.193973 958 log.go:172] (0xc0009d69a0) (0xc0009180a0) Stream added, broadcasting: 1\nI0813 18:30:31.196902 958 log.go:172] (0xc0009d69a0) Reply frame received for 1\nI0813 18:30:31.196956 958 log.go:172] (0xc0009d69a0) (0xc0006c12c0) Create stream\nI0813 18:30:31.196975 958 log.go:172] (0xc0009d69a0) (0xc0006c12c0) Stream added, broadcasting: 3\nI0813 18:30:31.198072 958 log.go:172] (0xc0009d69a0) Reply frame received for 3\nI0813 18:30:31.198114 958 log.go:172] (0xc0009d69a0) (0xc000918140) Create stream\nI0813 18:30:31.198133 958 log.go:172] (0xc0009d69a0) (0xc000918140) Stream added, broadcasting: 5\nI0813 18:30:31.199269 958 log.go:172] (0xc0009d69a0) Reply frame received for 5\nI0813 18:30:31.271407 958 log.go:172] (0xc0009d69a0) Data frame received for 5\nI0813 18:30:31.271429 958 log.go:172] (0xc000918140) (5) Data frame handling\nI0813 18:30:31.271445 958 log.go:172] (0xc000918140) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0813 18:30:31.272183 958 log.go:172] (0xc0009d69a0) Data frame received for 5\nI0813 18:30:31.272218 958 log.go:172] (0xc000918140) (5) Data frame handling\nI0813 18:30:31.272229 958 log.go:172] (0xc000918140) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0813 18:30:31.272264 958 log.go:172] (0xc0009d69a0) Data 
frame received for 3\nI0813 18:30:31.272276 958 log.go:172] (0xc0006c12c0) (3) Data frame handling\nI0813 18:30:31.272615 958 log.go:172] (0xc0009d69a0) Data frame received for 5\nI0813 18:30:31.272639 958 log.go:172] (0xc000918140) (5) Data frame handling\nI0813 18:30:31.274042 958 log.go:172] (0xc0009d69a0) Data frame received for 1\nI0813 18:30:31.274060 958 log.go:172] (0xc0009180a0) (1) Data frame handling\nI0813 18:30:31.274069 958 log.go:172] (0xc0009180a0) (1) Data frame sent\nI0813 18:30:31.274082 958 log.go:172] (0xc0009d69a0) (0xc0009180a0) Stream removed, broadcasting: 1\nI0813 18:30:31.274139 958 log.go:172] (0xc0009d69a0) Go away received\nI0813 18:30:31.274394 958 log.go:172] (0xc0009d69a0) (0xc0009180a0) Stream removed, broadcasting: 1\nI0813 18:30:31.274414 958 log.go:172] (0xc0009d69a0) (0xc0006c12c0) Stream removed, broadcasting: 3\nI0813 18:30:31.274424 958 log.go:172] (0xc0009d69a0) (0xc000918140) Stream removed, broadcasting: 5\n" Aug 13 18:30:31.281: INFO: stdout: "" Aug 13 18:30:31.282: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3259 execpodrx6gm -- /bin/sh -x -c nc -zv -t -w 2 10.111.19.26 80' Aug 13 18:30:31.485: INFO: stderr: "I0813 18:30:31.408227 978 log.go:172] (0xc000a13550) (0xc000a045a0) Create stream\nI0813 18:30:31.408279 978 log.go:172] (0xc000a13550) (0xc000a045a0) Stream added, broadcasting: 1\nI0813 18:30:31.412959 978 log.go:172] (0xc000a13550) Reply frame received for 1\nI0813 18:30:31.412985 978 log.go:172] (0xc000a13550) (0xc0007cf680) Create stream\nI0813 18:30:31.412992 978 log.go:172] (0xc000a13550) (0xc0007cf680) Stream added, broadcasting: 3\nI0813 18:30:31.413831 978 log.go:172] (0xc000a13550) Reply frame received for 3\nI0813 18:30:31.413873 978 log.go:172] (0xc000a13550) (0xc0005e8aa0) Create stream\nI0813 18:30:31.413888 978 log.go:172] (0xc000a13550) (0xc0005e8aa0) Stream added, broadcasting: 5\nI0813 18:30:31.414739 978 
log.go:172] (0xc000a13550) Reply frame received for 5\nI0813 18:30:31.479123 978 log.go:172] (0xc000a13550) Data frame received for 5\nI0813 18:30:31.479167 978 log.go:172] (0xc0005e8aa0) (5) Data frame handling\nI0813 18:30:31.479181 978 log.go:172] (0xc0005e8aa0) (5) Data frame sent\nI0813 18:30:31.479191 978 log.go:172] (0xc000a13550) Data frame received for 5\nI0813 18:30:31.479196 978 log.go:172] (0xc0005e8aa0) (5) Data frame handling\n+ nc -zv -t -w 2 10.111.19.26 80\nConnection to 10.111.19.26 80 port [tcp/http] succeeded!\nI0813 18:30:31.479217 978 log.go:172] (0xc000a13550) Data frame received for 3\nI0813 18:30:31.479226 978 log.go:172] (0xc0007cf680) (3) Data frame handling\nI0813 18:30:31.480353 978 log.go:172] (0xc000a13550) Data frame received for 1\nI0813 18:30:31.480373 978 log.go:172] (0xc000a045a0) (1) Data frame handling\nI0813 18:30:31.480385 978 log.go:172] (0xc000a045a0) (1) Data frame sent\nI0813 18:30:31.480398 978 log.go:172] (0xc000a13550) (0xc000a045a0) Stream removed, broadcasting: 1\nI0813 18:30:31.480415 978 log.go:172] (0xc000a13550) Go away received\nI0813 18:30:31.480689 978 log.go:172] (0xc000a13550) (0xc000a045a0) Stream removed, broadcasting: 1\nI0813 18:30:31.480701 978 log.go:172] (0xc000a13550) (0xc0007cf680) Stream removed, broadcasting: 3\nI0813 18:30:31.480706 978 log.go:172] (0xc000a13550) (0xc0005e8aa0) Stream removed, broadcasting: 5\n" Aug 13 18:30:31.486: INFO: stdout: "" Aug 13 18:30:31.486: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3259 execpodrx6gm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30734' Aug 13 18:30:31.692: INFO: stderr: "I0813 18:30:31.611115 998 log.go:172] (0xc000a189a0) (0xc0007ad540) Create stream\nI0813 18:30:31.611194 998 log.go:172] (0xc000a189a0) (0xc0007ad540) Stream added, broadcasting: 1\nI0813 18:30:31.613391 998 log.go:172] (0xc000a189a0) Reply frame received for 1\nI0813 18:30:31.613418 998 
log.go:172] (0xc000a189a0) (0xc0007ad5e0) Create stream\nI0813 18:30:31.613426 998 log.go:172] (0xc000a189a0) (0xc0007ad5e0) Stream added, broadcasting: 3\nI0813 18:30:31.614138 998 log.go:172] (0xc000a189a0) Reply frame received for 3\nI0813 18:30:31.614181 998 log.go:172] (0xc000a189a0) (0xc0007ad680) Create stream\nI0813 18:30:31.614191 998 log.go:172] (0xc000a189a0) (0xc0007ad680) Stream added, broadcasting: 5\nI0813 18:30:31.614845 998 log.go:172] (0xc000a189a0) Reply frame received for 5\nI0813 18:30:31.685374 998 log.go:172] (0xc000a189a0) Data frame received for 3\nI0813 18:30:31.685425 998 log.go:172] (0xc0007ad5e0) (3) Data frame handling\nI0813 18:30:31.685451 998 log.go:172] (0xc000a189a0) Data frame received for 5\nI0813 18:30:31.685464 998 log.go:172] (0xc0007ad680) (5) Data frame handling\nI0813 18:30:31.685479 998 log.go:172] (0xc0007ad680) (5) Data frame sent\nI0813 18:30:31.685495 998 log.go:172] (0xc000a189a0) Data frame received for 5\nI0813 18:30:31.685506 998 log.go:172] (0xc0007ad680) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30734\nConnection to 172.18.0.13 30734 port [tcp/30734] succeeded!\nI0813 18:30:31.686720 998 log.go:172] (0xc000a189a0) Data frame received for 1\nI0813 18:30:31.686738 998 log.go:172] (0xc0007ad540) (1) Data frame handling\nI0813 18:30:31.686748 998 log.go:172] (0xc0007ad540) (1) Data frame sent\nI0813 18:30:31.686760 998 log.go:172] (0xc000a189a0) (0xc0007ad540) Stream removed, broadcasting: 1\nI0813 18:30:31.686779 998 log.go:172] (0xc000a189a0) Go away received\nI0813 18:30:31.687256 998 log.go:172] (0xc000a189a0) (0xc0007ad540) Stream removed, broadcasting: 1\nI0813 18:30:31.687283 998 log.go:172] (0xc000a189a0) (0xc0007ad5e0) Stream removed, broadcasting: 3\nI0813 18:30:31.687295 998 log.go:172] (0xc000a189a0) (0xc0007ad680) Stream removed, broadcasting: 5\n" Aug 13 18:30:31.693: INFO: stdout: "" Aug 13 18:30:31.693: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 
--kubeconfig=/root/.kube/config exec --namespace=services-3259 execpodrx6gm -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30734' Aug 13 18:30:31.898: INFO: stderr: "I0813 18:30:31.820243 1018 log.go:172] (0xc0009a0fd0) (0xc00097e140) Create stream\nI0813 18:30:31.820294 1018 log.go:172] (0xc0009a0fd0) (0xc00097e140) Stream added, broadcasting: 1\nI0813 18:30:31.825413 1018 log.go:172] (0xc0009a0fd0) Reply frame received for 1\nI0813 18:30:31.825523 1018 log.go:172] (0xc0009a0fd0) (0xc000a983c0) Create stream\nI0813 18:30:31.825551 1018 log.go:172] (0xc0009a0fd0) (0xc000a983c0) Stream added, broadcasting: 3\nI0813 18:30:31.826772 1018 log.go:172] (0xc0009a0fd0) Reply frame received for 3\nI0813 18:30:31.826803 1018 log.go:172] (0xc0009a0fd0) (0xc000535ea0) Create stream\nI0813 18:30:31.826814 1018 log.go:172] (0xc0009a0fd0) (0xc000535ea0) Stream added, broadcasting: 5\nI0813 18:30:31.827486 1018 log.go:172] (0xc0009a0fd0) Reply frame received for 5\nI0813 18:30:31.890185 1018 log.go:172] (0xc0009a0fd0) Data frame received for 3\nI0813 18:30:31.890216 1018 log.go:172] (0xc000a983c0) (3) Data frame handling\nI0813 18:30:31.890235 1018 log.go:172] (0xc0009a0fd0) Data frame received for 5\nI0813 18:30:31.890244 1018 log.go:172] (0xc000535ea0) (5) Data frame handling\nI0813 18:30:31.890256 1018 log.go:172] (0xc000535ea0) (5) Data frame sent\nI0813 18:30:31.890264 1018 log.go:172] (0xc0009a0fd0) Data frame received for 5\nI0813 18:30:31.890271 1018 log.go:172] (0xc000535ea0) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.15 30734\nConnection to 172.18.0.15 30734 port [tcp/30734] succeeded!\nI0813 18:30:31.891733 1018 log.go:172] (0xc0009a0fd0) Data frame received for 1\nI0813 18:30:31.891750 1018 log.go:172] (0xc00097e140) (1) Data frame handling\nI0813 18:30:31.891763 1018 log.go:172] (0xc00097e140) (1) Data frame sent\nI0813 18:30:31.891780 1018 log.go:172] (0xc0009a0fd0) (0xc00097e140) Stream removed, broadcasting: 1\nI0813 18:30:31.891800 1018 log.go:172] 
(0xc0009a0fd0) Go away received\nI0813 18:30:31.892185 1018 log.go:172] (0xc0009a0fd0) (0xc00097e140) Stream removed, broadcasting: 1\nI0813 18:30:31.892202 1018 log.go:172] (0xc0009a0fd0) (0xc000a983c0) Stream removed, broadcasting: 3\nI0813 18:30:31.892212 1018 log.go:172] (0xc0009a0fd0) (0xc000535ea0) Stream removed, broadcasting: 5\n" Aug 13 18:30:31.898: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:30:31.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3259" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 • [SLOW TEST:16.066 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":275,"completed":74,"skipped":1252,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:30:31.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:30:49.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7904" for this suite. • [SLOW TEST:17.372 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. 
[Conformance]","total":275,"completed":75,"skipped":1263,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:30:49.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating pod pod-subpath-test-secret-ntfj STEP: Creating a pod to test atomic-volume-subpath Aug 13 18:30:49.414: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ntfj" in namespace "subpath-9229" to be "Succeeded or Failed" Aug 13 18:30:49.472: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Pending", Reason="", readiness=false. Elapsed: 57.33187ms Aug 13 18:30:51.555: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.140403288s Aug 13 18:30:53.559: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. Elapsed: 4.144984787s Aug 13 18:30:55.563: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. Elapsed: 6.148947382s Aug 13 18:30:57.753: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. Elapsed: 8.338779137s Aug 13 18:30:59.770: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.355765608s Aug 13 18:31:01.789: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. Elapsed: 12.374432615s Aug 13 18:31:03.793: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. Elapsed: 14.378385404s Aug 13 18:31:05.797: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. Elapsed: 16.382383877s Aug 13 18:31:07.801: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. Elapsed: 18.386887405s Aug 13 18:31:09.891: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. Elapsed: 20.476717851s Aug 13 18:31:11.999: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Running", Reason="", readiness=true. Elapsed: 22.584314701s Aug 13 18:31:14.003: INFO: Pod "pod-subpath-test-secret-ntfj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.588456572s STEP: Saw pod success Aug 13 18:31:14.003: INFO: Pod "pod-subpath-test-secret-ntfj" satisfied condition "Succeeded or Failed" Aug 13 18:31:14.005: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-secret-ntfj container test-container-subpath-secret-ntfj: STEP: delete the pod Aug 13 18:31:14.090: INFO: Waiting for pod pod-subpath-test-secret-ntfj to disappear Aug 13 18:31:14.101: INFO: Pod pod-subpath-test-secret-ntfj no longer exists STEP: Deleting pod pod-subpath-test-secret-ntfj Aug 13 18:31:14.101: INFO: Deleting pod "pod-subpath-test-secret-ntfj" in namespace "subpath-9229" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:31:14.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9229" for this suite. 
• [SLOW TEST:24.837 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":275,"completed":76,"skipped":1281,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:31:14.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 13 18:31:14.301: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df91c250-fbc9-4fad-8b2b-da0ff8daefae" in namespace "downward-api-3561" to be "Succeeded or Failed" Aug 13 18:31:14.371: INFO: Pod "downwardapi-volume-df91c250-fbc9-4fad-8b2b-da0ff8daefae": Phase="Pending", Reason="", readiness=false. 
Elapsed: 69.688387ms Aug 13 18:31:16.478: INFO: Pod "downwardapi-volume-df91c250-fbc9-4fad-8b2b-da0ff8daefae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176468415s Aug 13 18:31:18.482: INFO: Pod "downwardapi-volume-df91c250-fbc9-4fad-8b2b-da0ff8daefae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.180917508s Aug 13 18:31:20.486: INFO: Pod "downwardapi-volume-df91c250-fbc9-4fad-8b2b-da0ff8daefae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.185321764s STEP: Saw pod success Aug 13 18:31:20.486: INFO: Pod "downwardapi-volume-df91c250-fbc9-4fad-8b2b-da0ff8daefae" satisfied condition "Succeeded or Failed" Aug 13 18:31:20.489: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-df91c250-fbc9-4fad-8b2b-da0ff8daefae container client-container: STEP: delete the pod Aug 13 18:31:20.525: INFO: Waiting for pod downwardapi-volume-df91c250-fbc9-4fad-8b2b-da0ff8daefae to disappear Aug 13 18:31:20.539: INFO: Pod downwardapi-volume-df91c250-fbc9-4fad-8b2b-da0ff8daefae no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:31:20.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3561" for this suite. 
• [SLOW TEST:6.431 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":77,"skipped":1294,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:31:20.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name secret-test-e533b19f-fcc4-4f94-82c5-0108d3abb581 STEP: Creating a pod to test consume secrets Aug 13 18:31:20.658: INFO: Waiting up to 5m0s for pod "pod-secrets-6a9c6ae6-ec97-422b-8754-70e26127a009" in namespace "secrets-1552" to be "Succeeded or Failed" Aug 13 18:31:20.716: INFO: Pod "pod-secrets-6a9c6ae6-ec97-422b-8754-70e26127a009": Phase="Pending", Reason="", readiness=false. Elapsed: 58.340404ms Aug 13 18:31:22.720: INFO: Pod "pod-secrets-6a9c6ae6-ec97-422b-8754-70e26127a009": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.062369544s
Aug 13 18:31:24.724: INFO: Pod "pod-secrets-6a9c6ae6-ec97-422b-8754-70e26127a009": Phase="Running", Reason="", readiness=true. Elapsed: 4.066374106s
Aug 13 18:31:26.728: INFO: Pod "pod-secrets-6a9c6ae6-ec97-422b-8754-70e26127a009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.070467266s
STEP: Saw pod success
Aug 13 18:31:26.728: INFO: Pod "pod-secrets-6a9c6ae6-ec97-422b-8754-70e26127a009" satisfied condition "Succeeded or Failed"
Aug 13 18:31:26.731: INFO: Trying to get logs from node kali-worker pod pod-secrets-6a9c6ae6-ec97-422b-8754-70e26127a009 container secret-env-test:
STEP: delete the pod
Aug 13 18:31:26.764: INFO: Waiting for pod pod-secrets-6a9c6ae6-ec97-422b-8754-70e26127a009 to disappear
Aug 13 18:31:26.776: INFO: Pod pod-secrets-6a9c6ae6-ec97-422b-8754-70e26127a009 no longer exists
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:31:26.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1552" for this suite.
• [SLOW TEST:6.241 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:35
should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":275,"completed":78,"skipped":1301,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:31:26.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Aug 13 18:31:26.899: INFO: >>> kubeConfig: /root/.kube/config
Aug 13 18:31:29.865: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:31:40.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1606" for this suite.
• [SLOW TEST:13.813 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for multiple CRDs of different groups [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":275,"completed":79,"skipped":1323,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:31:40.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:31:40.715: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 13 18:31:42.689: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3324 create -f -'
Aug 13 18:31:46.097: INFO: stderr: ""
Aug 13 18:31:46.097: INFO: stdout: "e2e-test-crd-publish-openapi-2861-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 13 18:31:46.097: INFO: Running
'/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3324 delete e2e-test-crd-publish-openapi-2861-crds test-cr'
Aug 13 18:31:46.231: INFO: stderr: ""
Aug 13 18:31:46.232: INFO: stdout: "e2e-test-crd-publish-openapi-2861-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Aug 13 18:31:46.232: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3324 apply -f -'
Aug 13 18:31:46.554: INFO: stderr: ""
Aug 13 18:31:46.554: INFO: stdout: "e2e-test-crd-publish-openapi-2861-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Aug 13 18:31:46.555: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3324 delete e2e-test-crd-publish-openapi-2861-crds test-cr'
Aug 13 18:31:46.657: INFO: stderr: ""
Aug 13 18:31:46.657: INFO: stdout: "e2e-test-crd-publish-openapi-2861-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 13 18:31:46.657: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2861-crds'
Aug 13 18:31:46.914: INFO: stderr: ""
Aug 13 18:31:46.914: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2861-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:31:49.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3324" for this suite.
• [SLOW TEST:9.254 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
works for CRD preserving unknown fields at the schema root [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":275,"completed":80,"skipped":1323,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:31:49.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-cbbcd8de-e2ab-4fc6-8f0a-1c4441b9db65 in namespace container-probe-5061
Aug 13 18:31:56.050: INFO: Started pod liveness-cbbcd8de-e2ab-4fc6-8f0a-1c4441b9db65 in namespace container-probe-5061
STEP: checking the pod's current state and verifying that restartCount is present
Aug 13 18:31:56.053: INFO: Initial restart count of pod liveness-cbbcd8de-e2ab-4fc6-8f0a-1c4441b9db65 is 0
Aug 13 18:32:20.331: INFO: Restart count of pod container-probe-5061/liveness-cbbcd8de-e2ab-4fc6-8f0a-1c4441b9db65 is now 1 (24.278712378s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:32:20.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5061" for this suite.
• [SLOW TEST:30.598 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":81,"skipped":1341,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:32:20.456: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 13 18:32:20.854: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 13 18:32:20.894: INFO: Waiting for terminating
namespaces to be deleted...
Aug 13 18:32:20.898: INFO: Logging pods the kubelet thinks is on node kali-worker before test
Aug 13 18:32:20.906: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.906: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 13 18:32:20.906: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.906: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 13 18:32:20.906: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.906: INFO: Container kube-proxy ready: true, restart count 0
Aug 13 18:32:20.906: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.906: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 13 18:32:20.906: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.906: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 13 18:32:20.906: INFO: rally-65f59568-f0r2trzx from c-rally-65f59568-2qd1sysl started at 2020-08-13 18:30:32 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.906: INFO: Container rally-65f59568-f0r2trzx ready: false, restart count 0
Aug 13 18:32:20.906: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.906: INFO: Container kindnet-cni ready: true, restart count 1
Aug 13 18:32:20.906: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 13 18:32:20.925: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11
18:51:39 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.925: INFO: Container rally-7104017d-j5l4uv4e ready: true, restart count 1
Aug 13 18:32:20.925: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 18:16:03 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.925: INFO: Container rally-6c5ea4be-96nyoha6 ready: true, restart count 51
Aug 13 18:32:20.925: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.925: INFO: Container kindnet-cni ready: true, restart count 1
Aug 13 18:32:20.925: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.925: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 13 18:32:20.925: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.925: INFO: Container kube-proxy ready: true, restart count 0
Aug 13 18:32:20.925: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 13 18:32:20.925: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-184c0739-0762-4a75-917b-758a2281a8f1 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-184c0739-0762-4a75-917b-758a2281a8f1 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-184c0739-0762-4a75-917b-758a2281a8f1
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:32:38.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6531" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
• [SLOW TEST:17.604 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":275,"completed":82,"skipped":1355,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:32:38.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 13 18:32:38.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:32:55.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8141" for this suite.
• [SLOW TEST:17.584 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
updates the published spec when one version gets renamed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":275,"completed":83,"skipped":1367,"failed":0}
S
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:32:55.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 13 18:32:56.236: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-a 3bcf64e0-a4f1-45a3-882a-a194135b150a 9277747 0 2020-08-13 18:32:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update
v1 2020-08-13 18:32:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:32:56.237: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-a 3bcf64e0-a4f1-45a3-882a-a194135b150a 9277747 0 2020-08-13 18:32:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-13 18:32:56 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A and ensuring the correct watchers observe the notification Aug 13 18:33:06.243: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-a 3bcf64e0-a4f1-45a3-882a-a194135b150a 9277784 0 2020-08-13 18:32:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-13 18:33:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:33:06.244: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-a 3bcf64e0-a4f1-45a3-882a-a194135b150a 9277784 0 2020-08-13 18:32:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-13 18:33:06 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Aug 13 18:33:16.251: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-a 3bcf64e0-a4f1-45a3-882a-a194135b150a 9277816 0 2020-08-13 18:32:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-13 18:33:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:33:16.251: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-a 3bcf64e0-a4f1-45a3-882a-a194135b150a 9277816 0 2020-08-13 18:32:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] 
[{e2e.test Update v1 2020-08-13 18:33:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap A and ensuring the correct watchers observe the notification Aug 13 18:33:26.258: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-a 3bcf64e0-a4f1-45a3-882a-a194135b150a 9277846 0 2020-08-13 18:32:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-13 18:33:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:33:26.258: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-a 3bcf64e0-a4f1-45a3-882a-a194135b150a 9277846 0 2020-08-13 18:32:56 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] [{e2e.test Update v1 2020-08-13 18:33:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 
34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Aug 13 18:33:36.266: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-b b07d6120-7c9a-42c1-a8d7-7f857ccc6d88 9277876 0 2020-08-13 18:33:36 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-13 18:33:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:33:36.266: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-b b07d6120-7c9a-42c1-a8d7-7f857ccc6d88 9277876 0 2020-08-13 18:33:36 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-13 18:33:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} STEP: deleting configmap B and ensuring the correct watchers observe the notification Aug 13 18:33:46.274: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-b b07d6120-7c9a-42c1-a8d7-7f857ccc6d88 
9277904 0 2020-08-13 18:33:36 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-13 18:33:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:33:46.274: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3815 /api/v1/namespaces/watch-3815/configmaps/e2e-watch-test-configmap-b b07d6120-7c9a-42c1-a8d7-7f857ccc6d88 9277904 0 2020-08-13 18:33:36 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] [{e2e.test Update v1 2020-08-13 18:33:36 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:33:56.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3815" for this suite. 
• [SLOW TEST:60.641 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":275,"completed":84,"skipped":1368,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:33:56.286: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:34:00.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-554" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":85,"skipped":1375,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:34:00.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 18:34:01.251: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 18:34:03.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940441, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940441, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True",
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940441, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940441, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 13 18:34:05.316: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940441, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940441, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940441, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940441, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 13 18:34:08.305: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the 
admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:34:08.312: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8927" for this suite. STEP: Destroying namespace "webhook-8927-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.006 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":275,"completed":86,"skipped":1393,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:34:08.450: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 13 18:34:08.589: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2d1641fc-9c33-4beb-90b4-26c15b510b26" in namespace "downward-api-5904" to be "Succeeded or Failed" Aug 13 18:34:08.593: INFO: Pod "downwardapi-volume-2d1641fc-9c33-4beb-90b4-26c15b510b26": Phase="Pending", Reason="", readiness=false. Elapsed: 3.985408ms Aug 13 18:34:10.596: INFO: Pod "downwardapi-volume-2d1641fc-9c33-4beb-90b4-26c15b510b26": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00784201s Aug 13 18:34:12.601: INFO: Pod "downwardapi-volume-2d1641fc-9c33-4beb-90b4-26c15b510b26": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012216433s STEP: Saw pod success Aug 13 18:34:12.601: INFO: Pod "downwardapi-volume-2d1641fc-9c33-4beb-90b4-26c15b510b26" satisfied condition "Succeeded or Failed" Aug 13 18:34:12.605: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-2d1641fc-9c33-4beb-90b4-26c15b510b26 container client-container: STEP: delete the pod Aug 13 18:34:12.643: INFO: Waiting for pod downwardapi-volume-2d1641fc-9c33-4beb-90b4-26c15b510b26 to disappear Aug 13 18:34:12.879: INFO: Pod downwardapi-volume-2d1641fc-9c33-4beb-90b4-26c15b510b26 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:34:12.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5904" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":87,"skipped":1394,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:34:12.887: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Docker Containers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:34:17.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-2604" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":275,"completed":88,"skipped":1401,"failed":0} SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:34:17.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating projection with secret that has name projected-secret-test-3ff1bc45-eeb9-456a-80a2-dceabfb2945d STEP: Creating a pod to test consume secrets Aug 13 18:34:17.212: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f865741e-f11e-41b3-89c5-db5e08119852" in namespace "projected-4146" to be "Succeeded or Failed" Aug 13 18:34:17.227: INFO: Pod "pod-projected-secrets-f865741e-f11e-41b3-89c5-db5e08119852": Phase="Pending", Reason="", readiness=false. Elapsed: 15.663285ms Aug 13 18:34:19.231: INFO: Pod "pod-projected-secrets-f865741e-f11e-41b3-89c5-db5e08119852": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019542397s Aug 13 18:34:21.235: INFO: Pod "pod-projected-secrets-f865741e-f11e-41b3-89c5-db5e08119852": Phase="Running", Reason="", readiness=true. Elapsed: 4.023492101s Aug 13 18:34:23.244: INFO: Pod "pod-projected-secrets-f865741e-f11e-41b3-89c5-db5e08119852": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032440215s STEP: Saw pod success Aug 13 18:34:23.244: INFO: Pod "pod-projected-secrets-f865741e-f11e-41b3-89c5-db5e08119852" satisfied condition "Succeeded or Failed" Aug 13 18:34:23.247: INFO: Trying to get logs from node kali-worker2 pod pod-projected-secrets-f865741e-f11e-41b3-89c5-db5e08119852 container projected-secret-volume-test: STEP: delete the pod Aug 13 18:34:23.554: INFO: Waiting for pod pod-projected-secrets-f865741e-f11e-41b3-89c5-db5e08119852 to disappear Aug 13 18:34:23.699: INFO: Pod pod-projected-secrets-f865741e-f11e-41b3-89c5-db5e08119852 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:34:23.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4146" for this suite. 
• [SLOW TEST:6.573 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":89,"skipped":1405,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:34:23.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 13 18:34:24.757: INFO: Waiting up to 5m0s for pod "pod-2d4db1cb-5c43-455d-9cdf-722e74f287f4" in namespace "emptydir-6086" to be "Succeeded or Failed" Aug 13 18:34:24.849: INFO: Pod "pod-2d4db1cb-5c43-455d-9cdf-722e74f287f4": Phase="Pending", Reason="", readiness=false. Elapsed: 92.171353ms Aug 13 18:34:27.011: INFO: Pod "pod-2d4db1cb-5c43-455d-9cdf-722e74f287f4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.254074045s Aug 13 18:34:29.015: INFO: Pod "pod-2d4db1cb-5c43-455d-9cdf-722e74f287f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.258147586s Aug 13 18:34:31.019: INFO: Pod "pod-2d4db1cb-5c43-455d-9cdf-722e74f287f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.262450461s STEP: Saw pod success Aug 13 18:34:31.019: INFO: Pod "pod-2d4db1cb-5c43-455d-9cdf-722e74f287f4" satisfied condition "Succeeded or Failed" Aug 13 18:34:31.022: INFO: Trying to get logs from node kali-worker pod pod-2d4db1cb-5c43-455d-9cdf-722e74f287f4 container test-container: STEP: delete the pod Aug 13 18:34:31.171: INFO: Waiting for pod pod-2d4db1cb-5c43-455d-9cdf-722e74f287f4 to disappear Aug 13 18:34:31.201: INFO: Pod pod-2d4db1cb-5c43-455d-9cdf-722e74f287f4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:34:31.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6086" for this suite. 
• [SLOW TEST:7.501 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":90,"skipped":1431,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:34:31.210: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: validating api versions Aug 13 18:34:31.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config api-versions' Aug 13 18:34:31.776: INFO: stderr: "" Aug 13 18:34:31.776: INFO: stdout: 
"admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:34:31.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5882" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":275,"completed":91,"skipped":1453,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:34:31.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating Agnhost RC Aug 13 18:34:31.939: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5401' Aug 13 18:34:32.611: INFO: stderr: "" Aug 13 18:34:32.611: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
Aug 13 18:34:33.615: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:34:33.615: INFO: Found 0 / 1 Aug 13 18:34:34.615: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:34:34.615: INFO: Found 0 / 1 Aug 13 18:34:35.670: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:34:35.670: INFO: Found 0 / 1 Aug 13 18:34:36.642: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:34:36.642: INFO: Found 0 / 1 Aug 13 18:34:37.671: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:34:37.672: INFO: Found 1 / 1 Aug 13 18:34:37.672: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 13 18:34:37.886: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:34:37.886: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 13 18:34:37.886: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config patch pod agnhost-master-mc8l8 --namespace=kubectl-5401 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 13 18:34:38.159: INFO: stderr: "" Aug 13 18:34:38.159: INFO: stdout: "pod/agnhost-master-mc8l8 patched\n" STEP: checking annotations Aug 13 18:34:38.208: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:34:38.208: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:34:38.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5401" for this suite. 
• [SLOW TEST:6.562 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":275,"completed":92,"skipped":1481,"failed":0} SSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:34:38.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91 Aug 13 18:34:38.720: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 13 18:34:38.759: INFO: Waiting for terminating namespaces to be deleted... 
Aug 13 18:34:38.762: INFO: Logging pods the kubelet thinks is on node kali-worker before test Aug 13 18:34:38.769: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.769: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 13 18:34:38.769: INFO: agnhost-master-mc8l8 from kubectl-5401 started at 2020-08-13 18:34:32 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.769: INFO: Container agnhost-master ready: true, restart count 0 Aug 13 18:34:38.769: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.769: INFO: Container kindnet-cni ready: true, restart count 1 Aug 13 18:34:38.769: INFO: busybox-host-aliases333d421b-bae8-48f6-99a0-f3e4bfe90ab9 from kubelet-test-554 started at 2020-08-13 18:33:56 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.769: INFO: Container busybox-host-aliases333d421b-bae8-48f6-99a0-f3e4bfe90ab9 ready: false, restart count 0 Aug 13 18:34:38.769: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.769: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 Aug 13 18:34:38.769: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.769: INFO: Container rally-466602a1-db17uwyh ready: false, restart count 0 Aug 13 18:34:38.769: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.769: INFO: Container kube-proxy ready: true, restart count 0 Aug 13 18:34:38.769: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.769: INFO: Container 
rally-466602a1-db17uwyh ready: false, restart count 0 Aug 13 18:34:38.769: INFO: Logging pods the kubelet thinks is on node kali-worker2 before test Aug 13 18:34:38.774: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.774: INFO: Container rally-824618b1-6cukkjuh ready: true, restart count 3 Aug 13 18:34:38.774: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11 18:51:39 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.774: INFO: Container rally-7104017d-j5l4uv4e ready: true, restart count 1 Aug 13 18:34:38.774: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 18:16:03 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.774: INFO: Container rally-6c5ea4be-96nyoha6 ready: true, restart count 51 Aug 13 18:34:38.774: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.774: INFO: Container kube-proxy ready: true, restart count 0 Aug 13 18:34:38.774: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.774: INFO: Container kindnet-cni ready: true, restart count 1 Aug 13 18:34:38.774: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded) Aug 13 18:34:38.774: INFO: Container rally-19e4df10-30wkw9yu ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-58bb5324-183e-4d52-bacb-aec81974bfca 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-58bb5324-183e-4d52-bacb-aec81974bfca off the node kali-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-58bb5324-183e-4d52-bacb-aec81974bfca [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:34:49.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3585" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82 • [SLOW TEST:11.489 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":275,"completed":93,"skipped":1485,"failed":0} [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:34:49.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) 
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 13 18:34:49.959: INFO: Waiting up to 5m0s for pod "pod-ad31578d-0ad8-40de-8791-d800e360ae67" in namespace "emptydir-6847" to be "Succeeded or Failed" Aug 13 18:34:50.043: INFO: Pod "pod-ad31578d-0ad8-40de-8791-d800e360ae67": Phase="Pending", Reason="", readiness=false. Elapsed: 84.139816ms Aug 13 18:34:52.052: INFO: Pod "pod-ad31578d-0ad8-40de-8791-d800e360ae67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.09328904s Aug 13 18:34:54.056: INFO: Pod "pod-ad31578d-0ad8-40de-8791-d800e360ae67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097200897s Aug 13 18:34:56.220: INFO: Pod "pod-ad31578d-0ad8-40de-8791-d800e360ae67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.261199675s STEP: Saw pod success Aug 13 18:34:56.220: INFO: Pod "pod-ad31578d-0ad8-40de-8791-d800e360ae67" satisfied condition "Succeeded or Failed" Aug 13 18:34:56.223: INFO: Trying to get logs from node kali-worker pod pod-ad31578d-0ad8-40de-8791-d800e360ae67 container test-container: STEP: delete the pod Aug 13 18:34:56.771: INFO: Waiting for pod pod-ad31578d-0ad8-40de-8791-d800e360ae67 to disappear Aug 13 18:34:56.855: INFO: Pod pod-ad31578d-0ad8-40de-8791-d800e360ae67 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:34:56.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6847" for this suite. 
• [SLOW TEST:7.025 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":94,"skipped":1485,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:34:56.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod Aug 13 18:34:56.949: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:35:07.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4175" for this suite. 
• [SLOW TEST:10.280 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":275,"completed":95,"skipped":1508,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:35:07.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search 
_http._tcp.dns-test-service.dns-1471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1471.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1471.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 94.109.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.109.94_udp@PTR;check="$$(dig +tcp +noall +answer +search 94.109.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.109.94_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1471.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1471.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1471.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1471.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1471.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 94.109.100.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.100.109.94_udp@PTR;check="$$(dig +tcp +noall +answer +search 94.109.100.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.100.109.94_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 13 18:35:17.808: INFO: Unable to read wheezy_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:17.811: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:17.814: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:17.817: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:17.837: INFO: Unable to read jessie_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:17.839: INFO: Unable to read jessie_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:17.843: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod 
dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:17.845: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:17.865: INFO: Lookups using dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257 failed for: [wheezy_udp@dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_udp@dns-test-service.dns-1471.svc.cluster.local jessie_tcp@dns-test-service.dns-1471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local] Aug 13 18:35:22.870: INFO: Unable to read wheezy_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:22.874: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:22.877: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:22.881: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod 
dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:23.182: INFO: Unable to read jessie_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:23.186: INFO: Unable to read jessie_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:23.189: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:23.192: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:23.206: INFO: Lookups using dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257 failed for: [wheezy_udp@dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_udp@dns-test-service.dns-1471.svc.cluster.local jessie_tcp@dns-test-service.dns-1471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local] Aug 13 18:35:27.870: INFO: Unable to read wheezy_udp@dns-test-service.dns-1471.svc.cluster.local from pod 
dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:27.875: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:27.878: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:27.881: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:27.911: INFO: Unable to read jessie_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:27.914: INFO: Unable to read jessie_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:27.916: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:27.919: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not 
find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:27.938: INFO: Lookups using dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257 failed for: [wheezy_udp@dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_udp@dns-test-service.dns-1471.svc.cluster.local jessie_tcp@dns-test-service.dns-1471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local] Aug 13 18:35:32.870: INFO: Unable to read wheezy_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:32.874: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:32.878: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:32.882: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:32.964: INFO: Unable to read jessie_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods 
dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:32.966: INFO: Unable to read jessie_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:32.969: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:32.972: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:32.994: INFO: Lookups using dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257 failed for: [wheezy_udp@dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_udp@dns-test-service.dns-1471.svc.cluster.local jessie_tcp@dns-test-service.dns-1471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local] Aug 13 18:35:38.245: INFO: Unable to read wheezy_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:38.249: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods 
dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:38.315: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:38.952: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:38.981: INFO: Unable to read jessie_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:38.984: INFO: Unable to read jessie_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:38.986: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:38.989: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:39.004: INFO: Lookups using dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257 failed for: [wheezy_udp@dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_udp@dns-test-service.dns-1471.svc.cluster.local jessie_tcp@dns-test-service.dns-1471.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1471.svc.cluster.local] Aug 13 18:35:42.886: INFO: Unable to read wheezy_udp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:42.890: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:43.415: INFO: Unable to read jessie_tcp@dns-test-service.dns-1471.svc.cluster.local from pod dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257: the server could not find the requested resource (get pods dns-test-6e1cbe93-5890-43a7-a670-f1c094523257) Aug 13 18:35:43.438: INFO: Lookups using dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257 failed for: [wheezy_udp@dns-test-service.dns-1471.svc.cluster.local wheezy_tcp@dns-test-service.dns-1471.svc.cluster.local jessie_tcp@dns-test-service.dns-1471.svc.cluster.local] Aug 13 18:35:48.388: INFO: DNS probes using dns-1471/dns-test-6e1cbe93-5890-43a7-a670-f1c094523257 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:35:49.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1471" for this suite. 
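The probe script above derives two query names from IPs: a reverse-lookup name (`10.100.109.94` becomes `94.109.100.10.in-addr.arpa.` for the PTR query) and a dashed pod A record (`hostname -i` piped through `awk` to get `1-2-3-4.dns-1471.pod.cluster.local`). A sketch of those two transformations (function names are my own):

```python
def ptr_name(ipv4: str) -> str:
    """Reverse the octets to form the in-addr.arpa PTR query name, as the dig probe does."""
    return ".".join(reversed(ipv4.split("."))) + ".in-addr.arpa."

def pod_a_record(ipv4: str, namespace: str) -> str:
    """Dashed pod-IP A record name, mirroring the awk pipeline in the probe script."""
    return ipv4.replace(".", "-") + f".{namespace}.pod.cluster.local"

print(ptr_name("10.100.109.94"))               # 94.109.100.10.in-addr.arpa.
print(pod_a_record("10.244.1.7", "dns-1471"))  # 10-244-1-7.dns-1471.pod.cluster.local
```

The pod IP `10.244.1.7` is a made-up example; only the service IP `10.100.109.94` appears in the log.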
• [SLOW TEST:42.096 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":275,"completed":96,"skipped":1524,"failed":0} SS ------------------------------ [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:35:49.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename limitrange STEP: Waiting for a default service account to be provisioned in namespace [It] should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a LimitRange STEP: Setting up watch STEP: Submitting a LimitRange Aug 13 18:35:49.438: INFO: observed the limitRanges list STEP: Verifying LimitRange creation was observed STEP: Fetching the LimitRange to ensure it has proper values Aug 13 18:35:49.455: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 13 18:35:49.456: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with no resource requirements STEP: Ensuring Pod has resource requirements applied from LimitRange Aug 13 18:35:49.471: INFO: Verifying requests: expected map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] with actual map[cpu:{{100 -3} {} 100m DecimalSI} ephemeral-storage:{{214748364800 0} {} BinarySI} memory:{{209715200 0} {} BinarySI}] Aug 13 18:35:49.471: INFO: Verifying limits: expected map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{500 -3} {} 500m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Creating a Pod with partial resource requirements STEP: Ensuring Pod has merged resource requirements applied from LimitRange Aug 13 18:35:49.534: INFO: Verifying requests: expected map[cpu:{{300 -3} {} 300m DecimalSI} 
ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{161061273600 0} {} 150Gi BinarySI} memory:{{157286400 0} {} 150Mi BinarySI}] Aug 13 18:35:49.534: INFO: Verifying limits: expected map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] with actual map[cpu:{{300 -3} {} 300m DecimalSI} ephemeral-storage:{{536870912000 0} {} 500Gi BinarySI} memory:{{524288000 0} {} 500Mi BinarySI}] STEP: Failing to create a Pod with less than min resources STEP: Failing to create a Pod with more than max resources STEP: Updating a LimitRange STEP: Verifying LimitRange updating is effective STEP: Creating a Pod with less than former min resources STEP: Failing to create a Pod with more than max resources STEP: Deleting a LimitRange STEP: Verifying the LimitRange was deleted Aug 13 18:35:57.933: INFO: limitRange is already deleted STEP: Creating a Pod with more than former max resources [AfterEach] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:35:58.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "limitrange-5041" for this suite. • [SLOW TEST:9.532 seconds] [sig-scheduling] LimitRange /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 should create a LimitRange with defaults and ensure pod has those defaults applied. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-scheduling] LimitRange should create a LimitRange with defaults and ensure pod has those defaults applied. 
[Conformance]","total":275,"completed":97,"skipped":1526,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:35:58.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Aug 13 18:36:07.113: INFO: Successfully updated pod "annotationupdate903001dd-6d9a-44bc-b570-d3bf1a450cce" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:36:09.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3121" for this suite. 
• [SLOW TEST:10.388 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":98,"skipped":1532,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:36:09.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating secret with name projected-secret-test-3f0620f5-2d55-44db-92cb-ba3ad1c87c3e STEP: Creating a pod to test consume secrets Aug 13 18:36:09.270: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-abb9eb79-0450-44b2-8211-1330c317ee31" in namespace "projected-3549" to be "Succeeded or Failed" Aug 13 18:36:09.291: INFO: Pod "pod-projected-secrets-abb9eb79-0450-44b2-8211-1330c317ee31": Phase="Pending", Reason="", readiness=false. Elapsed: 20.590118ms Aug 13 18:36:11.307: INFO: Pod "pod-projected-secrets-abb9eb79-0450-44b2-8211-1330c317ee31": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.036503848s Aug 13 18:36:13.336: INFO: Pod "pod-projected-secrets-abb9eb79-0450-44b2-8211-1330c317ee31": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065750729s Aug 13 18:36:15.340: INFO: Pod "pod-projected-secrets-abb9eb79-0450-44b2-8211-1330c317ee31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069687505s STEP: Saw pod success Aug 13 18:36:15.340: INFO: Pod "pod-projected-secrets-abb9eb79-0450-44b2-8211-1330c317ee31" satisfied condition "Succeeded or Failed" Aug 13 18:36:15.343: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-abb9eb79-0450-44b2-8211-1330c317ee31 container secret-volume-test: STEP: delete the pod Aug 13 18:36:15.999: INFO: Waiting for pod pod-projected-secrets-abb9eb79-0450-44b2-8211-1330c317ee31 to disappear Aug 13 18:36:16.214: INFO: Pod pod-projected-secrets-abb9eb79-0450-44b2-8211-1330c317ee31 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:36:16.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3549" for this suite. 
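The recurring "Waiting up to 5m0s for pod … to be 'Succeeded or Failed'" lines, with `Elapsed:` stamps roughly every two seconds, come from a poll-until-deadline loop. A generic sketch of that pattern (my own simplification, not the framework's actual `WaitForPodCondition` code):

```python
import time

def wait_for(predicate, timeout_s: float, interval_s: float = 2.0) -> bool:
    """Poll `predicate` until it returns True or `timeout_s` elapses.
    Mirrors the 'Waiting up to 5m0s ...' loops in the log above."""
    deadline = time.monotonic() + timeout_s
    while True:
        if predicate():
            return True
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return False
        time.sleep(min(interval_s, remaining))

# Toy usage: a condition that becomes true on the third poll.
polls = {"n": 0}
def done():
    polls["n"] += 1
    return polls["n"] >= 3

print(wait_for(done, timeout_s=5.0, interval_s=0.01))  # True
```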
• [SLOW TEST:7.168 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":99,"skipped":1533,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:36:16.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Aug 13 18:36:16.487: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8381 /api/v1/namespaces/watch-8381/configmaps/e2e-watch-test-watch-closed d754613b-4a7c-4491-b676-e1437fb4d4cb 9278822 0 2020-08-13 18:36:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-13 18:36:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 
97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:36:16.487: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8381 /api/v1/namespaces/watch-8381/configmaps/e2e-watch-test-watch-closed d754613b-4a7c-4491-b676-e1437fb4d4cb 9278823 0 2020-08-13 18:36:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-13 18:36:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Aug 13 18:36:16.612: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8381 /api/v1/namespaces/watch-8381/configmaps/e2e-watch-test-watch-closed d754613b-4a7c-4491-b676-e1437fb4d4cb 9278824 0 2020-08-13 18:36:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-13 18:36:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 
115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Aug 13 18:36:16.612: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-8381 /api/v1/namespaces/watch-8381/configmaps/e2e-watch-test-watch-closed d754613b-4a7c-4491-b676-e1437fb4d4cb 9278825 0 2020-08-13 18:36:16 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] [{e2e.test Update v1 2020-08-13 18:36:16 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:36:16.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8381" for this suite. 
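For reference, the object driving the watch test above can be reconstructed from the log itself. Below is a minimal sketch (assumed, not copied from the e2e test source) of the watched ConfigMap; the name, namespace, label, and `mutation` data key all appear verbatim in the log lines above. The "restart watching" behavior being verified works by opening the second watch with the last resourceVersion the first watch observed (9278823 in this run), so the apiserver replays the MODIFIED and DELETED events that happened while no watch was open.

```yaml
# Sketch of the ConfigMap under watch (fields taken from the log output above).
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  namespace: watch-8381
  labels:
    watch-this-configmap: watch-closed-and-restarted
data:
  mutation: "1"   # incremented to "2" by the second modification in the log
```

The resumed watch corresponds to a request of the form `GET /api/v1/namespaces/watch-8381/configmaps?watch=true&resourceVersion=9278823` against the apiserver.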
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":275,"completed":100,"skipped":1554,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:36:16.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test override all Aug 13 18:36:16.762: INFO: Waiting up to 5m0s for pod "client-containers-47dc2eea-91ba-48e9-a7d6-5519e1d92f27" in namespace "containers-7693" to be "Succeeded or Failed" Aug 13 18:36:16.789: INFO: Pod "client-containers-47dc2eea-91ba-48e9-a7d6-5519e1d92f27": Phase="Pending", Reason="", readiness=false. Elapsed: 26.191831ms Aug 13 18:36:19.066: INFO: Pod "client-containers-47dc2eea-91ba-48e9-a7d6-5519e1d92f27": Phase="Pending", Reason="", readiness=false. Elapsed: 2.30389947s Aug 13 18:36:21.070: INFO: Pod "client-containers-47dc2eea-91ba-48e9-a7d6-5519e1d92f27": Phase="Running", Reason="", readiness=true. Elapsed: 4.308035507s Aug 13 18:36:23.118: INFO: Pod "client-containers-47dc2eea-91ba-48e9-a7d6-5519e1d92f27": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.355558379s STEP: Saw pod success Aug 13 18:36:23.118: INFO: Pod "client-containers-47dc2eea-91ba-48e9-a7d6-5519e1d92f27" satisfied condition "Succeeded or Failed" Aug 13 18:36:23.120: INFO: Trying to get logs from node kali-worker pod client-containers-47dc2eea-91ba-48e9-a7d6-5519e1d92f27 container test-container: STEP: delete the pod Aug 13 18:36:23.204: INFO: Waiting for pod client-containers-47dc2eea-91ba-48e9-a7d6-5519e1d92f27 to disappear Aug 13 18:36:23.256: INFO: Pod client-containers-47dc2eea-91ba-48e9-a7d6-5519e1d92f27 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:36:23.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7693" for this suite. • [SLOW TEST:6.637 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":275,"completed":101,"skipped":1575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:36:23.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, 
basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:36:27.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8019" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":275,"completed":102,"skipped":1609,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:36:27.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 13 18:36:27.762: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8081' Aug 13 18:36:28.062: INFO: stderr: "" Aug 13 18:36:28.062: INFO: stdout: "replicationcontroller/agnhost-master created\n" Aug 13 18:36:28.062: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8081' Aug 13 18:36:28.459: INFO: stderr: "" Aug 13 18:36:28.459: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. Aug 13 18:36:29.466: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:36:29.466: INFO: Found 0 / 1 Aug 13 18:36:30.473: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:36:30.473: INFO: Found 0 / 1 Aug 13 18:36:31.480: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:36:31.480: INFO: Found 1 / 1 Aug 13 18:36:31.480: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 13 18:36:31.499: INFO: Selector matched 1 pods for map[app:agnhost] Aug 13 18:36:31.499: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Aug 13 18:36:31.499: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe pod agnhost-master-xfglk --namespace=kubectl-8081' Aug 13 18:36:31.628: INFO: stderr: "" Aug 13 18:36:31.628: INFO: stdout: "Name: agnhost-master-xfglk\nNamespace: kubectl-8081\nPriority: 0\nNode: kali-worker/172.18.0.13\nStart Time: Thu, 13 Aug 2020 18:36:28 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.2.105\nIPs:\n IP: 10.244.2.105\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://10c653443d6555fcb3a23d20c0a8f436a8ea2137121176e17b64cf60b9372612\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Image ID: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 13 Aug 2020 18:36:30 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-47t57 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-47t57:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-47t57\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-8081/agnhost-master-xfglk to kali-worker\n Normal Pulled 2s kubelet, kali-worker Container image \"us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\" already present on machine\n Normal Created 1s kubelet, kali-worker Created container agnhost-master\n Normal Started 1s kubelet, kali-worker Started container agnhost-master\n" 
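The `kubectl describe pod` output above pins down the shape of the controller the test created with `kubectl create -f -`. A hedged reconstruction of that ReplicationController follows; every field value (name, labels, selector, image, port) is taken from the describe output in the log, but the exact manifest lives in the e2e test source, so treat this as a sketch:

```yaml
# Reconstruction of the agnhost-master ReplicationController (values from the
# describe output above; not the literal manifest used by the test).
apiVersion: v1
kind: ReplicationController
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
spec:
  replicas: 1
  selector:
    app: agnhost
    role: master
  template:
    metadata:
      labels:
        app: agnhost
        role: master
    spec:
      containers:
      - name: agnhost-master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        ports:
        - containerPort: 6379
```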
Aug 13 18:36:31.628: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-8081' Aug 13 18:36:31.733: INFO: stderr: "" Aug 13 18:36:31.733: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8081\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: agnhost-master-xfglk\n" Aug 13 18:36:31.733: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-8081' Aug 13 18:36:31.829: INFO: stderr: "" Aug 13 18:36:31.829: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-8081\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.102.174.225\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.2.105:6379\nSession Affinity: None\nEvents: \n" Aug 13 18:36:31.832: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe node kali-control-plane' Aug 13 18:36:31.956: INFO: stderr: "" Aug 13 18:36:31.956: INFO: stdout: "Name: kali-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=kali-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 
0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Fri, 10 Jul 2020 10:27:46 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: kali-control-plane\n AcquireTime: \n RenewTime: Thu, 13 Aug 2020 18:36:23 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 13 Aug 2020 18:32:45 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 13 Aug 2020 18:32:45 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 13 Aug 2020 18:32:45 +0000 Fri, 10 Jul 2020 10:27:45 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 13 Aug 2020 18:32:45 +0000 Fri, 10 Jul 2020 10:28:23 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.16\n Hostname: kali-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: d83d42c4b42d4de1b3233683d9cadf95\n System UUID: e06c57c7-ce4f-4ae9-8bb6-40f1dc0e1a64\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-34-g49b0743c\n Kubelet Version: v1.18.4\n Kube-Proxy Version: v1.18.4\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-66bff467f8-qtcqs 
100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 34d\n kube-system coredns-66bff467f8-tjkg9 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 34d\n kube-system etcd-kali-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kindnet-zxw2f 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 34d\n kube-system kube-apiserver-kali-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kube-controller-manager-kali-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kube-proxy-xmqbs 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d\n kube-system kube-scheduler-kali-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 34d\n local-path-storage local-path-provisioner-67795f75bd-clsb6 0 (0%) 0 (0%) 0 (0%) 0 (0%) 34d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\n hugepages-1Gi 0 (0%) 0 (0%)\n hugepages-2Mi 0 (0%) 0 (0%)\nEvents: \n" Aug 13 18:36:31.956: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config describe namespace kubectl-8081' Aug 13 18:36:32.448: INFO: stderr: "" Aug 13 18:36:32.448: INFO: stdout: "Name: kubectl-8081\nLabels: e2e-framework=kubectl\n e2e-run=18d1537d-cb2b-4adb-9610-a9f1e74c6290\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:36:32.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8081" for this suite. 
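The `kubectl describe service` output in this test corresponds to a ClusterIP Service along the following lines. This is a sketch assembled from the describe fields shown above (selector `app=agnhost,role=master`, port 6379, named target port `agnhost-server`), not the test's actual manifest:

```yaml
# Sketch of the agnhost-master Service (fields from the describe output above).
apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
spec:
  type: ClusterIP
  selector:
    app: agnhost
    role: master
  ports:
  - port: 6379
    targetPort: agnhost-server   # resolves to the container port named in the pod spec
```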
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":275,"completed":103,"skipped":1629,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:36:32.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Aug 13 18:36:33.526: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Aug 13 18:36:35.536: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940593, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940593, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940593, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940593, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 13 18:36:37.543: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940593, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940593, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940593, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940593, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Aug 13 18:36:40.784: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:36:41.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5334" for this suite. STEP: Destroying namespace "webhook-5334-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.085 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":275,"completed":104,"skipped":1637,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:36:41.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating simple 
DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Aug 13 18:36:42.245: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:42.345: INFO: Number of nodes with available pods: 0 Aug 13 18:36:42.345: INFO: Node kali-worker is running more than one daemon pod Aug 13 18:36:43.374: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:43.378: INFO: Number of nodes with available pods: 0 Aug 13 18:36:43.378: INFO: Node kali-worker is running more than one daemon pod Aug 13 18:36:44.450: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:44.454: INFO: Number of nodes with available pods: 0 Aug 13 18:36:44.454: INFO: Node kali-worker is running more than one daemon pod Aug 13 18:36:45.350: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:45.354: INFO: Number of nodes with available pods: 0 Aug 13 18:36:45.354: INFO: Node kali-worker is running more than one daemon pod Aug 13 18:36:46.350: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:46.355: INFO: Number of nodes with available pods: 2 Aug 13 18:36:46.355: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Aug 13 18:36:46.388: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:46.391: INFO: Number of nodes with available pods: 1 Aug 13 18:36:46.391: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:47.396: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:47.398: INFO: Number of nodes with available pods: 1 Aug 13 18:36:47.398: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:48.449: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:48.452: INFO: Number of nodes with available pods: 1 Aug 13 18:36:48.452: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:49.419: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:49.422: INFO: Number of nodes with available pods: 1 Aug 13 18:36:49.422: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:50.432: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:50.435: INFO: Number of nodes with available pods: 1 Aug 13 18:36:50.435: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:51.398: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:51.401: INFO: Number of nodes with available pods: 1 Aug 13 18:36:51.401: INFO: Node 
kali-worker2 is running more than one daemon pod Aug 13 18:36:52.397: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:52.401: INFO: Number of nodes with available pods: 1 Aug 13 18:36:52.401: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:53.404: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:53.408: INFO: Number of nodes with available pods: 1 Aug 13 18:36:53.408: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:54.396: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:54.400: INFO: Number of nodes with available pods: 1 Aug 13 18:36:54.400: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:55.713: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:55.716: INFO: Number of nodes with available pods: 1 Aug 13 18:36:55.716: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:56.494: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:56.670: INFO: Number of nodes with available pods: 1 Aug 13 18:36:56.671: INFO: Node kali-worker2 is running more than one daemon pod Aug 13 18:36:57.396: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 13 18:36:57.399: INFO: Number of nodes with available 
pods: 2 Aug 13 18:36:57.399: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6270, will wait for the garbage collector to delete the pods Aug 13 18:36:57.459: INFO: Deleting DaemonSet.extensions daemon-set took: 6.68673ms Aug 13 18:36:57.760: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.238057ms Aug 13 18:37:03.566: INFO: Number of nodes with available pods: 0 Aug 13 18:37:03.566: INFO: Number of running nodes: 0, number of available pods: 0 Aug 13 18:37:03.569: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6270/daemonsets","resourceVersion":"9279196"},"items":null} Aug 13 18:37:03.571: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6270/pods","resourceVersion":"9279196"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:37:03.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6270" for this suite. 
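The behavior recorded above (daemon pods land on both worker nodes but skip `kali-control-plane`) is exactly what a DaemonSet with no master toleration does against a kubeadm-style cluster, since the control-plane node carries the `node-role.kubernetes.io/master:NoSchedule` taint shown in the log. A minimal sketch of such a DaemonSet; the name and namespace match the log, while the label and image are hypothetical stand-ins, not the test's real values:

```yaml
# Sketch only: label key/value and image are illustrative placeholders.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-6270
spec:
  selector:
    matchLabels:
      app: daemon-set        # hypothetical label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      # No tolerations: the DaemonSet skips tainted control-plane nodes,
      # matching the "can't tolerate node kali-control-plane" lines above.
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.2   # hypothetical image
```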
• [SLOW TEST:22.077 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":275,"completed":105,"skipped":1649,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:37:03.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 13 18:37:10.280: INFO: Successfully updated pod "pod-update-58598251-d1d5-4507-9935-7ec44a52935a" STEP: verifying the updated pod is in kubernetes Aug 13 18:37:10.327: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:37:10.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1437" for this suite. 
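The pod-update test above exercises the small set of Pod fields that are mutable after creation (chiefly labels, annotations, container image, and `activeDeadlineSeconds`). A hypothetical strategic-merge patch fragment of the kind such an update uses; the label key and value here are illustrative, not taken from the test:

```yaml
# Illustrative patch fragment: updates a label on a running pod.
metadata:
  labels:
    time: updated   # hypothetical label; most other Pod spec fields are immutable
```

Applied with the equivalent inline JSON, e.g. `kubectl patch pod <name> -p '{"metadata":{"labels":{"time":"updated"}}}'`.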
• [SLOW TEST:6.720 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":275,"completed":106,"skipped":1661,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:37:10.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 18:37:11.139: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 18:37:13.251: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940631, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940631, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940631, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940631, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:37:15.255: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940631, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940631, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940631, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940631, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 18:37:18.293: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:37:18.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2880-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:37:19.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8155" for this suite.
STEP: Destroying namespace "webhook-8155-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.229 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":275,"completed":107,"skipped":1728,"failed":0}
S
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:37:19.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-092adac0-51b2-4231-861d-f82edb60af5f
STEP: Creating a pod to test consume secrets
Aug 13 18:37:19.868: INFO: Waiting up to 5m0s for pod "pod-secrets-56fb73bd-58d2-4696-94e4-5b0c09b4b622" in namespace "secrets-1836" to be "Succeeded or Failed"
Aug 13 18:37:20.029: INFO: Pod "pod-secrets-56fb73bd-58d2-4696-94e4-5b0c09b4b622": Phase="Pending", Reason="", readiness=false. Elapsed: 161.017616ms
Aug 13 18:37:22.101: INFO: Pod "pod-secrets-56fb73bd-58d2-4696-94e4-5b0c09b4b622": Phase="Pending", Reason="", readiness=false. Elapsed: 2.233077576s
Aug 13 18:37:24.105: INFO: Pod "pod-secrets-56fb73bd-58d2-4696-94e4-5b0c09b4b622": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.237086109s
STEP: Saw pod success
Aug 13 18:37:24.105: INFO: Pod "pod-secrets-56fb73bd-58d2-4696-94e4-5b0c09b4b622" satisfied condition "Succeeded or Failed"
Aug 13 18:37:24.108: INFO: Trying to get logs from node kali-worker pod pod-secrets-56fb73bd-58d2-4696-94e4-5b0c09b4b622 container secret-volume-test: 
STEP: delete the pod
Aug 13 18:37:24.143: INFO: Waiting for pod pod-secrets-56fb73bd-58d2-4696-94e4-5b0c09b4b622 to disappear
Aug 13 18:37:24.153: INFO: Pod pod-secrets-56fb73bd-58d2-4696-94e4-5b0c09b4b622 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:37:24.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1836" for this suite.
•
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":108,"skipped":1729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:37:24.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:37:24.250: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 13 18:37:24.288: INFO: Number of nodes with available pods: 0
Aug 13 18:37:24.289: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 13 18:37:24.389: INFO: Number of nodes with available pods: 0
Aug 13 18:37:24.389: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:25.393: INFO: Number of nodes with available pods: 0
Aug 13 18:37:25.393: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:26.574: INFO: Number of nodes with available pods: 0
Aug 13 18:37:26.574: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:27.393: INFO: Number of nodes with available pods: 0
Aug 13 18:37:27.393: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:28.730: INFO: Number of nodes with available pods: 1
Aug 13 18:37:28.730: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 13 18:37:28.820: INFO: Number of nodes with available pods: 1
Aug 13 18:37:28.820: INFO: Number of running nodes: 0, number of available pods: 1
Aug 13 18:37:29.849: INFO: Number of nodes with available pods: 0
Aug 13 18:37:29.850: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 13 18:37:29.866: INFO: Number of nodes with available pods: 0
Aug 13 18:37:29.866: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:30.872: INFO: Number of nodes with available pods: 0
Aug 13 18:37:30.872: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:31.879: INFO: Number of nodes with available pods: 0
Aug 13 18:37:31.879: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:32.871: INFO: Number of nodes with available pods: 0
Aug 13 18:37:32.871: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:33.897: INFO: Number of nodes with available pods: 0
Aug 13 18:37:33.897: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:34.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:34.870: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:35.933: INFO: Number of nodes with available pods: 0
Aug 13 18:37:35.934: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:36.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:36.870: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:37.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:37.870: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:38.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:38.870: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:39.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:39.870: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:41.009: INFO: Number of nodes with available pods: 0
Aug 13 18:37:41.009: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:41.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:41.870: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:42.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:42.870: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:43.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:43.871: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:44.891: INFO: Number of nodes with available pods: 0
Aug 13 18:37:44.891: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:45.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:45.870: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:46.870: INFO: Number of nodes with available pods: 0
Aug 13 18:37:46.870: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 18:37:47.870: INFO: Number of nodes with available pods: 1
Aug 13 18:37:47.870: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3318, will wait for the garbage collector to delete the pods
Aug 13 18:37:47.933: INFO: Deleting DaemonSet.extensions daemon-set took: 5.847158ms
Aug 13 18:37:48.233: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.293329ms
Aug 13 18:38:03.338: INFO: Number of nodes with available pods: 0
Aug 13 18:38:03.338: INFO: Number of running nodes: 0, number of available pods: 0
Aug 13 18:38:03.340: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3318/daemonsets","resourceVersion":"9279573"},"items":null}
Aug 13 18:38:03.342: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3318/pods","resourceVersion":"9279573"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:38:03.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3318" for this suite.
• [SLOW TEST:39.252 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":275,"completed":109,"skipped":1776,"failed":0}
[sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:38:03.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0813 18:38:04.666238 7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 13 18:38:04.666: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:38:04.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3985" for this suite.
•
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":275,"completed":110,"skipped":1776,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:38:04.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-a3cd6dc7-0c97-4daa-9890-0f8665be2168
STEP: Creating a pod to test consume configMaps
Aug 13 18:38:04.810: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-947d5d58-0859-4cb2-8cf4-232326078983" in namespace "projected-831" to be "Succeeded or Failed"
Aug 13 18:38:04.825: INFO: Pod "pod-projected-configmaps-947d5d58-0859-4cb2-8cf4-232326078983": Phase="Pending", Reason="", readiness=false. Elapsed: 14.44219ms
Aug 13 18:38:06.829: INFO: Pod "pod-projected-configmaps-947d5d58-0859-4cb2-8cf4-232326078983": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018746657s
Aug 13 18:38:09.353: INFO: Pod "pod-projected-configmaps-947d5d58-0859-4cb2-8cf4-232326078983": Phase="Pending", Reason="", readiness=false. Elapsed: 4.54277761s
Aug 13 18:38:11.401: INFO: Pod "pod-projected-configmaps-947d5d58-0859-4cb2-8cf4-232326078983": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.590319108s
STEP: Saw pod success
Aug 13 18:38:11.401: INFO: Pod "pod-projected-configmaps-947d5d58-0859-4cb2-8cf4-232326078983" satisfied condition "Succeeded or Failed"
Aug 13 18:38:11.403: INFO: Trying to get logs from node kali-worker2 pod pod-projected-configmaps-947d5d58-0859-4cb2-8cf4-232326078983 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 13 18:38:11.564: INFO: Waiting for pod pod-projected-configmaps-947d5d58-0859-4cb2-8cf4-232326078983 to disappear
Aug 13 18:38:11.766: INFO: Pod pod-projected-configmaps-947d5d58-0859-4cb2-8cf4-232326078983 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:38:11.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-831" for this suite.
• [SLOW TEST:7.161 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":111,"skipped":1779,"failed":0}
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:38:11.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-a01882d0-f7db-4627-93be-5425b502a20a
STEP: Creating a pod to test consume secrets
Aug 13 18:38:12.079: INFO: Waiting up to 5m0s for pod "pod-secrets-c5190347-03bd-469d-9268-f1cb2fb0a640" in namespace "secrets-3254" to be "Succeeded or Failed"
Aug 13 18:38:12.213: INFO: Pod "pod-secrets-c5190347-03bd-469d-9268-f1cb2fb0a640": Phase="Pending", Reason="", readiness=false. Elapsed: 133.718192ms
Aug 13 18:38:14.251: INFO: Pod "pod-secrets-c5190347-03bd-469d-9268-f1cb2fb0a640": Phase="Pending", Reason="", readiness=false. Elapsed: 2.171504234s
Aug 13 18:38:16.255: INFO: Pod "pod-secrets-c5190347-03bd-469d-9268-f1cb2fb0a640": Phase="Running", Reason="", readiness=true. Elapsed: 4.175479887s
Aug 13 18:38:18.275: INFO: Pod "pod-secrets-c5190347-03bd-469d-9268-f1cb2fb0a640": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.195813917s
STEP: Saw pod success
Aug 13 18:38:18.275: INFO: Pod "pod-secrets-c5190347-03bd-469d-9268-f1cb2fb0a640" satisfied condition "Succeeded or Failed"
Aug 13 18:38:18.278: INFO: Trying to get logs from node kali-worker pod pod-secrets-c5190347-03bd-469d-9268-f1cb2fb0a640 container secret-volume-test: 
STEP: delete the pod
Aug 13 18:38:18.320: INFO: Waiting for pod pod-secrets-c5190347-03bd-469d-9268-f1cb2fb0a640 to disappear
Aug 13 18:38:18.334: INFO: Pod pod-secrets-c5190347-03bd-469d-9268-f1cb2fb0a640 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:38:18.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3254" for this suite.
• [SLOW TEST:6.507 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":112,"skipped":1782,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:38:18.341: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-fece1d23-3d71-4253-8381-cd0c598178c9 in namespace container-probe-2256
Aug 13 18:38:22.463: INFO: Started pod liveness-fece1d23-3d71-4253-8381-cd0c598178c9 in namespace container-probe-2256
STEP: checking the pod's current state and verifying that restartCount is present
Aug 13 18:38:22.465: INFO: Initial restart count of pod liveness-fece1d23-3d71-4253-8381-cd0c598178c9 is 0
Aug 13 18:38:38.585: INFO: Restart count of pod container-probe-2256/liveness-fece1d23-3d71-4253-8381-cd0c598178c9 is now 1 (16.120164702s elapsed)
Aug 13 18:38:58.956: INFO: Restart count of pod container-probe-2256/liveness-fece1d23-3d71-4253-8381-cd0c598178c9 is now 2 (36.491066339s elapsed)
Aug 13 18:39:19.224: INFO: Restart count of pod container-probe-2256/liveness-fece1d23-3d71-4253-8381-cd0c598178c9 is now 3 (56.75900708s elapsed)
Aug 13 18:39:39.290: INFO: Restart count of pod container-probe-2256/liveness-fece1d23-3d71-4253-8381-cd0c598178c9 is now 4 (1m16.824532024s elapsed)
Aug 13 18:40:41.574: INFO: Restart count of pod container-probe-2256/liveness-fece1d23-3d71-4253-8381-cd0c598178c9 is now 5 (2m19.109420385s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:40:41.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2256" for this suite.
• [SLOW TEST:143.278 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":275,"completed":113,"skipped":1848,"failed":0}
SSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:40:41.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating pod
Aug 13 18:40:45.731: INFO: Pod pod-hostip-68ace38c-a160-4cae-9bf2-f91c5d599a7d has hostIP: 172.18.0.13
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:40:45.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3422" for this suite.
•
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":275,"completed":114,"skipped":1861,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:40:45.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 18:40:47.324: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 18:40:49.355: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940847, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940847, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940847, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940846, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:40:51.359: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940847, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940847, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940847, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940846, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 18:40:54.458: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:40:54.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:40:55.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2135" for this suite.
STEP: Destroying namespace "webhook-2135-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:10.044 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":275,"completed":115,"skipped":1916,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:40:55.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 18:40:55.874: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bfd6e3da-17b3-42df-8677-fc7d49ff345b" in namespace "projected-7797" to be "Succeeded or Failed"
Aug 13 18:40:55.917: INFO: Pod "downwardapi-volume-bfd6e3da-17b3-42df-8677-fc7d49ff345b": Phase="Pending", Reason="", readiness=false. Elapsed: 43.599555ms
Aug 13 18:40:57.921: INFO: Pod "downwardapi-volume-bfd6e3da-17b3-42df-8677-fc7d49ff345b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04688193s
Aug 13 18:40:59.924: INFO: Pod "downwardapi-volume-bfd6e3da-17b3-42df-8677-fc7d49ff345b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04977708s
Aug 13 18:41:01.926: INFO: Pod "downwardapi-volume-bfd6e3da-17b3-42df-8677-fc7d49ff345b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.052291379s
STEP: Saw pod success
Aug 13 18:41:01.926: INFO: Pod "downwardapi-volume-bfd6e3da-17b3-42df-8677-fc7d49ff345b" satisfied condition "Succeeded or Failed"
Aug 13 18:41:01.943: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-bfd6e3da-17b3-42df-8677-fc7d49ff345b container client-container: 
STEP: delete the pod
Aug 13 18:41:02.018: INFO: Waiting for pod downwardapi-volume-bfd6e3da-17b3-42df-8677-fc7d49ff345b to disappear
Aug 13 18:41:02.021: INFO: Pod downwardapi-volume-bfd6e3da-17b3-42df-8677-fc7d49ff345b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:41:02.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7797" for this suite.
• [SLOW TEST:6.274 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":275,"completed":116,"skipped":1943,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:41:02.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It]
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1873.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-1873.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1873.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-1873.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-1873.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1873.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 13 18:41:10.493: INFO: DNS probes using dns-1873/dns-test-57e9dce4-ccc7-40d5-a06a-2479c3cecd29 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:41:10.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1873" for this suite. • [SLOW TEST:8.560 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":275,"completed":117,"skipped":1958,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:41:10.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1288 STEP: creating a pod Aug 13 18:41:11.209: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run logs-generator --image=us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 --namespace=kubectl-4314 -- logs-generator --log-lines-total 100 --run-duration 20s' Aug 13 18:41:11.440: INFO: stderr: "" Aug 13 18:41:11.440: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Waiting for log generator to start. Aug 13 18:41:11.440: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Aug 13 18:41:11.440: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4314" to be "running and ready, or succeeded" Aug 13 18:41:11.484: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 43.956261ms Aug 13 18:41:13.634: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194016974s Aug 13 18:41:15.911: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.471309555s Aug 13 18:41:17.948: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 6.508125998s Aug 13 18:41:17.948: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Aug 13 18:41:17.948: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings Aug 13 18:41:17.948: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4314' Aug 13 18:41:18.519: INFO: stderr: "" Aug 13 18:41:18.519: INFO: stdout: "I0813 18:41:15.227674 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/ft8 388\nI0813 18:41:15.427828 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/8qx 344\nI0813 18:41:15.627821 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/zws 351\nI0813 18:41:15.827781 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/bjs 259\nI0813 18:41:16.027864 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/d27 488\nI0813 18:41:16.229705 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/5jxv 320\nI0813 18:41:16.427847 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/xrz 533\nI0813 18:41:16.627840 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/47q 561\nI0813 18:41:16.827876 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/vj7 449\nI0813 18:41:17.027937 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/jstt 514\nI0813 18:41:17.227990 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/46nv 230\nI0813 18:41:17.427866 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/59n2 541\nI0813 18:41:17.627834 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/rj4f 420\nI0813 18:41:17.827866 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/r82 549\nI0813 18:41:18.027856 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/d66 271\nI0813 18:41:18.227855 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/4bhq 241\nI0813 18:41:18.427825 1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/5p77 577\n" STEP: limiting log lines Aug 13 18:41:18.519: INFO: Running '/usr/local/bin/kubectl 
--server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4314 --tail=1' Aug 13 18:41:18.718: INFO: stderr: "" Aug 13 18:41:18.718: INFO: stdout: "I0813 18:41:18.627859 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/kksq 390\n" Aug 13 18:41:18.718: INFO: got output "I0813 18:41:18.627859 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/kksq 390\n" STEP: limiting log bytes Aug 13 18:41:18.718: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4314 --limit-bytes=1' Aug 13 18:41:18.827: INFO: stderr: "" Aug 13 18:41:18.827: INFO: stdout: "I" Aug 13 18:41:18.827: INFO: got output "I" STEP: exposing timestamps Aug 13 18:41:18.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4314 --tail=1 --timestamps' Aug 13 18:41:18.937: INFO: stderr: "" Aug 13 18:41:18.937: INFO: stdout: "2020-08-13T18:41:18.827961493Z I0813 18:41:18.827804 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/spk 440\n" Aug 13 18:41:18.937: INFO: got output "2020-08-13T18:41:18.827961493Z I0813 18:41:18.827804 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/spk 440\n" STEP: restricting to a time range Aug 13 18:41:21.437: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4314 --since=1s' Aug 13 18:41:21.548: INFO: stderr: "" Aug 13 18:41:21.548: INFO: stdout: "I0813 18:41:20.627866 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/576m 433\nI0813 18:41:20.827888 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/kkq6 549\nI0813 18:41:21.027896 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/2g5 546\nI0813 
18:41:21.227852 1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/xq8n 574\nI0813 18:41:21.427945 1 logs_generator.go:76] 31 GET /api/v1/namespaces/default/pods/jb8 472\n" Aug 13 18:41:21.549: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4314 --since=24h' Aug 13 18:41:21.692: INFO: stderr: "" Aug 13 18:41:21.692: INFO: stdout: "I0813 18:41:15.227674 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/ft8 388\nI0813 18:41:15.427828 1 logs_generator.go:76] 1 GET /api/v1/namespaces/ns/pods/8qx 344\nI0813 18:41:15.627821 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/zws 351\nI0813 18:41:15.827781 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/bjs 259\nI0813 18:41:16.027864 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/ns/pods/d27 488\nI0813 18:41:16.229705 1 logs_generator.go:76] 5 POST /api/v1/namespaces/default/pods/5jxv 320\nI0813 18:41:16.427847 1 logs_generator.go:76] 6 POST /api/v1/namespaces/kube-system/pods/xrz 533\nI0813 18:41:16.627840 1 logs_generator.go:76] 7 POST /api/v1/namespaces/default/pods/47q 561\nI0813 18:41:16.827876 1 logs_generator.go:76] 8 POST /api/v1/namespaces/ns/pods/vj7 449\nI0813 18:41:17.027937 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/kube-system/pods/jstt 514\nI0813 18:41:17.227990 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/46nv 230\nI0813 18:41:17.427866 1 logs_generator.go:76] 11 POST /api/v1/namespaces/ns/pods/59n2 541\nI0813 18:41:17.627834 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/ns/pods/rj4f 420\nI0813 18:41:17.827866 1 logs_generator.go:76] 13 GET /api/v1/namespaces/kube-system/pods/r82 549\nI0813 18:41:18.027856 1 logs_generator.go:76] 14 POST /api/v1/namespaces/ns/pods/d66 271\nI0813 18:41:18.227855 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/4bhq 241\nI0813 18:41:18.427825 1 logs_generator.go:76] 16 PUT 
/api/v1/namespaces/default/pods/5p77 577\nI0813 18:41:18.627859 1 logs_generator.go:76] 17 POST /api/v1/namespaces/kube-system/pods/kksq 390\nI0813 18:41:18.827804 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/spk 440\nI0813 18:41:19.027856 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/zk5 547\nI0813 18:41:19.227821 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/6z7 342\nI0813 18:41:19.427842 1 logs_generator.go:76] 21 PUT /api/v1/namespaces/ns/pods/vx2 269\nI0813 18:41:19.627929 1 logs_generator.go:76] 22 GET /api/v1/namespaces/kube-system/pods/brn 203\nI0813 18:41:19.827792 1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/d5r 515\nI0813 18:41:20.027876 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/kube-system/pods/bhn5 355\nI0813 18:41:20.227853 1 logs_generator.go:76] 25 PUT /api/v1/namespaces/ns/pods/pq2 469\nI0813 18:41:20.427823 1 logs_generator.go:76] 26 GET /api/v1/namespaces/default/pods/cb9j 522\nI0813 18:41:20.627866 1 logs_generator.go:76] 27 GET /api/v1/namespaces/default/pods/576m 433\nI0813 18:41:20.827888 1 logs_generator.go:76] 28 POST /api/v1/namespaces/ns/pods/kkq6 549\nI0813 18:41:21.027896 1 logs_generator.go:76] 29 PUT /api/v1/namespaces/default/pods/2g5 546\nI0813 18:41:21.227852 1 logs_generator.go:76] 30 POST /api/v1/namespaces/default/pods/xq8n 574\nI0813 18:41:21.427945 1 logs_generator.go:76] 31 GET /api/v1/namespaces/default/pods/jb8 472\nI0813 18:41:21.627802 1 logs_generator.go:76] 32 POST /api/v1/namespaces/ns/pods/skkp 327\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1294 Aug 13 18:41:21.692: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4314' Aug 13 18:41:33.469: INFO: stderr: "" Aug 13 18:41:33.469: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:41:33.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4314" for this suite. • [SLOW TEST:22.856 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1284 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":275,"completed":118,"skipped":1973,"failed":0} SSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:41:33.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 Aug 13 18:41:33.663: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 13 18:41:38.667: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 13 18:41:38.667: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment 
test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Aug 13 18:41:38.755: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-2707 /apis/apps/v1/namespaces/deployment-2707/deployments/test-cleanup-deployment a1f3d9b9-2709-4854-bf85-fc479f341549 9280561 1 2020-08-13 18:41:38 +0000 UTC map[name:cleanup-pod] map[] [] [] [{e2e.test Update apps/v1 2020-08-13 18:41:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 
58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033215a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} Aug 13 18:41:38.783: INFO: New ReplicaSet "test-cleanup-deployment-b4867b47f" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-b4867b47f deployment-2707 /apis/apps/v1/namespaces/deployment-2707/replicasets/test-cleanup-deployment-b4867b47f b1697b04-2a30-4108-b313-2ff96ad81a26 9280563 1 2020-08-13 18:41:38 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment a1f3d9b9-2709-4854-bf85-fc479f341549 0xc0032d97b0 0xc0032d97b1}] [] [{kube-controller-manager Update apps/v1 2020-08-13 18:41:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 
123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 49 102 51 100 57 98 57 45 50 55 48 57 45 52 56 53 52 45 98 102 56 53 45 102 99 52 55 57 102 51 52 49 53 52 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 
102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: b4867b47f,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [] [] []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032d9828 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Aug 13 18:41:38.784: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 13 18:41:38.784: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-2707 /apis/apps/v1/namespaces/deployment-2707/replicasets/test-cleanup-controller 
a3b93f78-409e-45b9-b4c3-a28a8ec88e03 9280562 1 2020-08-13 18:41:33 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment a1f3d9b9-2709-4854-bf85-fc479f341549 0xc0032d968f 0xc0032d96a0}] [] [{e2e.test Update apps/v1 2020-08-13 18:41:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 
105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-13 18:41:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 49 102 51 100 57 98 57 45 50 55 48 57 45 52 56 53 52 45 98 102 56 53 45 102 99 52 55 57 102 51 52 49 53 52 57 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0032d9738 ClusterFirst map[] false false false 
PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} Aug 13 18:41:38.888: INFO: Pod "test-cleanup-controller-xvvng" is available: &Pod{ObjectMeta:{test-cleanup-controller-xvvng test-cleanup-controller- deployment-2707 /api/v1/namespaces/deployment-2707/pods/test-cleanup-controller-xvvng 31d3f9a9-ee2b-4959-bb4c-f2bba56e39b8 9280547 0 2020-08-13 18:41:33 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller a3b93f78-409e-45b9-b4c3-a28a8ec88e03 0xc0032d9d47 0xc0032d9d48}] [] [{kube-controller-manager Update v1 2020-08-13 18:41:33 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 97 51 98 57 51 102 55 56 45 52 48 57 101 45 52 53 98 57 45 98 52 99 51 45 97 50 56 97 56 101 99 56 56 101 48 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 
58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 18:41:37 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 
34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 50 48 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 
125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-g6kmb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-g6kmb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-g6kmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/
not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:41:33 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:41:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:41:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:41:33 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.120,StartTime:2020-08-13 18:41:33 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:41:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://924d5aef86534ed223dd56abda046d3cfd785f4d0acb1ca6d144bd464ba30965,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.120,},},EphemeralContainerStatuses:[]ContainerStatus{},},} 
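The long runs of decimal numbers inside the `FieldsV1{Raw:*[...]}` fields in the dumps above are how Go's default formatter prints a `[]byte`: each number is the ASCII code of one character of the managedFields JSON document. A minimal stdlib-only sketch, using the opening bytes of the `e2e.test` entry printed above:

```go
package main

import "fmt"

func main() {
	// Opening bytes of the FieldsV1 Raw dump above: each decimal value
	// is the ASCII code of one character of the managed-fields JSON.
	raw := []byte{123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123}
	fmt.Println(string(raw)) // prints {"f:metadata":{
}
```

Converting the full array the same way yields the complete `{"f:metadata":...}` field-ownership document that server-side apply records for each manager.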
Aug 13 18:41:38.889: INFO: Pod "test-cleanup-deployment-b4867b47f-sghvn" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-b4867b47f-sghvn test-cleanup-deployment-b4867b47f- deployment-2707 /api/v1/namespaces/deployment-2707/pods/test-cleanup-deployment-b4867b47f-sghvn f875cf53-9839-43d2-a324-5f8cdbd4d037 9280569 0 2020-08-13 18:41:38 +0000 UTC map[name:cleanup-pod pod-template-hash:b4867b47f] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-b4867b47f b1697b04-2a30-4108-b313-2ff96ad81a26 0xc0032d9f00 0xc0032d9f01}] [] [{kube-controller-manager Update v1 2020-08-13 18:41:38 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 98 49 54 57 55 98 48 52 45 50 97 51 48 45 52 49 48 56 45 98 51 49 51 45 50 102 102 57 54 97 100 56 49 97 50 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 
34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-g6kmb,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-g6kmb,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-g6kmb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:
nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:41:38 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:41:38.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-2707" for this suite. • [SLOW TEST:5.570 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":275,"completed":119,"skipped":1983,"failed":0} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:41:39.045: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods Aug 13 18:41:47.763: INFO: Successfully updated pod "adopt-release-9f996" STEP: Checking that the Job readopts the Pod Aug 13 18:41:47.763: 
INFO: Waiting up to 15m0s for pod "adopt-release-9f996" in namespace "job-1727" to be "adopted" Aug 13 18:41:47.795: INFO: Pod "adopt-release-9f996": Phase="Running", Reason="", readiness=true. Elapsed: 31.292129ms Aug 13 18:41:49.799: INFO: Pod "adopt-release-9f996": Phase="Running", Reason="", readiness=true. Elapsed: 2.035749548s Aug 13 18:41:49.799: INFO: Pod "adopt-release-9f996" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod Aug 13 18:41:50.309: INFO: Successfully updated pod "adopt-release-9f996" STEP: Checking that the Job releases the Pod Aug 13 18:41:50.309: INFO: Waiting up to 15m0s for pod "adopt-release-9f996" in namespace "job-1727" to be "released" Aug 13 18:41:50.335: INFO: Pod "adopt-release-9f996": Phase="Running", Reason="", readiness=true. Elapsed: 26.226481ms Aug 13 18:41:52.600: INFO: Pod "adopt-release-9f996": Phase="Running", Reason="", readiness=true. Elapsed: 2.29090589s Aug 13 18:41:52.600: INFO: Pod "adopt-release-9f996" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:41:52.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1727" for this suite. 
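The adopt/release behaviour verified above hinges on equality-based selector matching: the Job controller adopts an orphaned pod whose labels satisfy the Job's selector, and releases a pod once its labels are edited so they no longer match. A stdlib-only sketch of that membership check (the selector key and values here are illustrative, not taken from the test):

```go
package main

import "fmt"

// matchesSelector reports whether labels satisfy every key/value pair in
// selector -- the equality-based check behind controller adoption.
func matchesSelector(selector, labels map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func main() {
	selector := map[string]string{"job-name": "adopt-release"} // hypothetical selector
	orphan := map[string]string{"job-name": "adopt-release"}
	relabeled := map[string]string{} // labels removed, as in the test

	fmt.Println(matchesSelector(selector, orphan))    // true  -> pod is adopted
	fmt.Println(matchesSelector(selector, relabeled)) // false -> pod is released
}
```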
• [SLOW TEST:13.562 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":275,"completed":120,"skipped":1987,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:41:52.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Aug 13 18:41:59.769: INFO: Successfully updated pod "pod-update-activedeadlineseconds-0b9998e6-25e5-4a60-9a7a-802d1d68ae15" Aug 13 18:41:59.769: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-0b9998e6-25e5-4a60-9a7a-802d1d68ae15" in namespace "pods-338" to be "terminated due to deadline exceeded" Aug 13 18:41:59.785: INFO: Pod "pod-update-activedeadlineseconds-0b9998e6-25e5-4a60-9a7a-802d1d68ae15": Phase="Running", Reason="", readiness=true. 
Elapsed: 15.41166ms Aug 13 18:42:02.491: INFO: Pod "pod-update-activedeadlineseconds-0b9998e6-25e5-4a60-9a7a-802d1d68ae15": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.72155513s Aug 13 18:42:02.491: INFO: Pod "pod-update-activedeadlineseconds-0b9998e6-25e5-4a60-9a7a-802d1d68ae15" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:42:02.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-338" for this suite. • [SLOW TEST:9.920 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":275,"completed":121,"skipped":2009,"failed":0} SSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:42:02.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698 [It] should find a service from listing all namespaces [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:42:03.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6270" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":275,"completed":122,"skipped":2012,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:42:03.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a pod to test downward API volume plugin Aug 13 18:42:03.493: INFO: Waiting up to 5m0s for pod "downwardapi-volume-962bbdfa-af41-47c3-a279-5e27911e08a5" in namespace "downward-api-6313" to be "Succeeded or Failed" Aug 13 18:42:03.528: INFO: Pod "downwardapi-volume-962bbdfa-af41-47c3-a279-5e27911e08a5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.667933ms Aug 13 18:42:05.532: INFO: Pod "downwardapi-volume-962bbdfa-af41-47c3-a279-5e27911e08a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038880385s Aug 13 18:42:07.544: INFO: Pod "downwardapi-volume-962bbdfa-af41-47c3-a279-5e27911e08a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051118394s STEP: Saw pod success Aug 13 18:42:07.544: INFO: Pod "downwardapi-volume-962bbdfa-af41-47c3-a279-5e27911e08a5" satisfied condition "Succeeded or Failed" Aug 13 18:42:07.546: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-962bbdfa-af41-47c3-a279-5e27911e08a5 container client-container: STEP: delete the pod Aug 13 18:42:07.728: INFO: Waiting for pod downwardapi-volume-962bbdfa-af41-47c3-a279-5e27911e08a5 to disappear Aug 13 18:42:07.791: INFO: Pod downwardapi-volume-962bbdfa-af41-47c3-a279-5e27911e08a5 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:42:07.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6313" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":123,"skipped":2015,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:42:07.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating the pod Aug 13 18:42:12.652: INFO: Successfully updated pod "annotationupdate3d74e8e7-e43f-4f10-83db-2376246c870f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:42:14.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9521" for this suite. 
• [SLOW TEST:6.879 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":275,"completed":124,"skipped":2037,"failed":0} SSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:42:14.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2895.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-2895.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2895.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-2895.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-2895.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2895.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 13 18:42:23.292: INFO: DNS probes using dns-2895/dns-test-07b0e2e9-9c70-4707-be6d-8a8a6358fc60 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:42:24.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2895" for this suite. 
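The probe scripts above build the pod's DNS A record name by dashing the pod IP and appending `<namespace>.pod.cluster.local` (that is what the `hostname -i | awk -F.` pipeline does); the pod IP 10.244.2.120 seen earlier would become 10-244-2-120.dns-2895.pod.cluster.local. The same transformation as a stdlib-only sketch:

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord mirrors the awk pipeline in the probe script: replace the
// dots in the pod IP with dashes and append <namespace>.pod.<zone>.
func podARecord(podIP, namespace string) string {
	return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	fmt.Println(podARecord("10.244.2.120", "dns-2895"))
	// prints: 10-244-2-120.dns-2895.pod.cluster.local
}
```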
• [SLOW TEST:10.452 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":275,"completed":125,"skipped":2046,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178 STEP: Creating a kubernetes client Aug 13 18:42:25.133: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703 STEP: Creating configMap with name configmap-test-volume-f58d982e-bd01-47a2-9cf6-7a95ca8b6c9f STEP: Creating a pod to test consume configMaps Aug 13 18:42:25.817: INFO: Waiting up to 5m0s for pod "pod-configmaps-1ab97bd4-f5dd-46d6-990d-22e1a218e9f1" in namespace "configmap-9126" to be "Succeeded or Failed" Aug 13 18:42:25.899: INFO: Pod "pod-configmaps-1ab97bd4-f5dd-46d6-990d-22e1a218e9f1": Phase="Pending", Reason="", readiness=false. Elapsed: 81.571871ms Aug 13 18:42:27.904: INFO: Pod "pod-configmaps-1ab97bd4-f5dd-46d6-990d-22e1a218e9f1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.085980424s Aug 13 18:42:29.908: INFO: Pod "pod-configmaps-1ab97bd4-f5dd-46d6-990d-22e1a218e9f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090583417s Aug 13 18:42:31.929: INFO: Pod "pod-configmaps-1ab97bd4-f5dd-46d6-990d-22e1a218e9f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111379327s STEP: Saw pod success Aug 13 18:42:31.929: INFO: Pod "pod-configmaps-1ab97bd4-f5dd-46d6-990d-22e1a218e9f1" satisfied condition "Succeeded or Failed" Aug 13 18:42:31.932: INFO: Trying to get logs from node kali-worker pod pod-configmaps-1ab97bd4-f5dd-46d6-990d-22e1a218e9f1 container configmap-volume-test: STEP: delete the pod Aug 13 18:42:31.947: INFO: Waiting for pod pod-configmaps-1ab97bd4-f5dd-46d6-990d-22e1a218e9f1 to disappear Aug 13 18:42:31.953: INFO: Pod pod-configmaps-1ab97bd4-f5dd-46d6-990d-22e1a218e9f1 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179 Aug 13 18:42:31.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9126" for this suite. 
• [SLOW TEST:6.826 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":126,"skipped":2046,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:42:31.959: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-projected-pcjh
STEP: Creating a pod to test atomic-volume-subpath
Aug 13 18:42:32.087: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-pcjh" in namespace "subpath-5199" to be "Succeeded or Failed"
Aug 13 18:42:32.119: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Pending", Reason="", readiness=false. Elapsed: 32.550196ms
Aug 13 18:42:34.211: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.124013128s
Aug 13 18:42:36.214: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 4.126737412s
Aug 13 18:42:38.217: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 6.130549461s
Aug 13 18:42:40.222: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 8.134778258s
Aug 13 18:42:42.225: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 10.138546635s
Aug 13 18:42:44.230: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 12.14267453s
Aug 13 18:42:46.265: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 14.177824061s
Aug 13 18:42:48.269: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 16.182208629s
Aug 13 18:42:50.273: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 18.186210806s
Aug 13 18:42:52.278: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 20.191181812s
Aug 13 18:42:54.283: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 22.195939797s
Aug 13 18:42:56.287: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Running", Reason="", readiness=true. Elapsed: 24.2002444s
Aug 13 18:42:58.290: INFO: Pod "pod-subpath-test-projected-pcjh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.203627345s
STEP: Saw pod success
Aug 13 18:42:58.291: INFO: Pod "pod-subpath-test-projected-pcjh" satisfied condition "Succeeded or Failed"
Aug 13 18:42:58.293: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-projected-pcjh container test-container-subpath-projected-pcjh: 
STEP: delete the pod
Aug 13 18:42:58.347: INFO: Waiting for pod pod-subpath-test-projected-pcjh to disappear
Aug 13 18:42:58.383: INFO: Pod pod-subpath-test-projected-pcjh no longer exists
STEP: Deleting pod pod-subpath-test-projected-pcjh
Aug 13 18:42:58.384: INFO: Deleting pod "pod-subpath-test-projected-pcjh" in namespace "subpath-5199"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:42:58.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5199" for this suite.
• [SLOW TEST:26.473 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":275,"completed":127,"skipped":2084,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:42:58.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 18:42:58.596: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b10a275f-6f23-4c29-a6eb-85964a965fad" in namespace "projected-1188" to be "Succeeded or Failed"
Aug 13 18:42:58.660: INFO: Pod "downwardapi-volume-b10a275f-6f23-4c29-a6eb-85964a965fad": Phase="Pending", Reason="", readiness=false. Elapsed: 63.692411ms
Aug 13 18:43:00.780: INFO: Pod "downwardapi-volume-b10a275f-6f23-4c29-a6eb-85964a965fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183907966s
Aug 13 18:43:02.785: INFO: Pod "downwardapi-volume-b10a275f-6f23-4c29-a6eb-85964a965fad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.189492948s
STEP: Saw pod success
Aug 13 18:43:02.785: INFO: Pod "downwardapi-volume-b10a275f-6f23-4c29-a6eb-85964a965fad" satisfied condition "Succeeded or Failed"
Aug 13 18:43:02.788: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-b10a275f-6f23-4c29-a6eb-85964a965fad container client-container: 
STEP: delete the pod
Aug 13 18:43:03.088: INFO: Waiting for pod downwardapi-volume-b10a275f-6f23-4c29-a6eb-85964a965fad to disappear
Aug 13 18:43:03.093: INFO: Pod downwardapi-volume-b10a275f-6f23-4c29-a6eb-85964a965fad no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:43:03.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1188" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":128,"skipped":2111,"failed":0}
SSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:43:03.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test substitution in container's command
Aug 13 18:43:03.469: INFO: Waiting up to 5m0s for pod "var-expansion-16ffedbf-2c55-4226-adc8-33c095d7c6d3" in namespace "var-expansion-5332" to be "Succeeded or Failed"
Aug 13 18:43:03.475: INFO: Pod "var-expansion-16ffedbf-2c55-4226-adc8-33c095d7c6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.685092ms
Aug 13 18:43:05.479: INFO: Pod "var-expansion-16ffedbf-2c55-4226-adc8-33c095d7c6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010657171s
Aug 13 18:43:07.504: INFO: Pod "var-expansion-16ffedbf-2c55-4226-adc8-33c095d7c6d3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035143827s
Aug 13 18:43:09.507: INFO: Pod "var-expansion-16ffedbf-2c55-4226-adc8-33c095d7c6d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038278553s
STEP: Saw pod success
Aug 13 18:43:09.507: INFO: Pod "var-expansion-16ffedbf-2c55-4226-adc8-33c095d7c6d3" satisfied condition "Succeeded or Failed"
Aug 13 18:43:09.509: INFO: Trying to get logs from node kali-worker pod var-expansion-16ffedbf-2c55-4226-adc8-33c095d7c6d3 container dapi-container: 
STEP: delete the pod
Aug 13 18:43:09.555: INFO: Waiting for pod var-expansion-16ffedbf-2c55-4226-adc8-33c095d7c6d3 to disappear
Aug 13 18:43:09.607: INFO: Pod var-expansion-16ffedbf-2c55-4226-adc8-33c095d7c6d3 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:43:09.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-5332" for this suite.
• [SLOW TEST:6.531 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":275,"completed":129,"skipped":2114,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:43:09.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 13 18:43:09.816: INFO: Waiting up to 5m0s for pod "pod-19dcff2b-b648-472d-a1a4-07506e6e3a98" in namespace "emptydir-3505" to be "Succeeded or Failed"
Aug 13 18:43:09.828: INFO: Pod "pod-19dcff2b-b648-472d-a1a4-07506e6e3a98": Phase="Pending", Reason="", readiness=false. Elapsed: 11.315789ms
Aug 13 18:43:12.037: INFO: Pod "pod-19dcff2b-b648-472d-a1a4-07506e6e3a98": Phase="Pending", Reason="", readiness=false. Elapsed: 2.220814429s
Aug 13 18:43:14.041: INFO: Pod "pod-19dcff2b-b648-472d-a1a4-07506e6e3a98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.224688336s
STEP: Saw pod success
Aug 13 18:43:14.041: INFO: Pod "pod-19dcff2b-b648-472d-a1a4-07506e6e3a98" satisfied condition "Succeeded or Failed"
Aug 13 18:43:14.043: INFO: Trying to get logs from node kali-worker pod pod-19dcff2b-b648-472d-a1a4-07506e6e3a98 container test-container: 
STEP: delete the pod
Aug 13 18:43:14.100: INFO: Waiting for pod pod-19dcff2b-b648-472d-a1a4-07506e6e3a98 to disappear
Aug 13 18:43:14.150: INFO: Pod pod-19dcff2b-b648-472d-a1a4-07506e6e3a98 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:43:14.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3505" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":130,"skipped":2126,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:43:14.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 18:43:14.750: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 18:43:16.761: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940994, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940994, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940994, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940994, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:43:18.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940994, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940994, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940994, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732940994, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 18:43:21.800: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Aug 13 18:43:25.851: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config attach --namespace=webhook-7355 to-be-attached-pod -i -c=container1'
Aug 13 18:43:28.789: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:43:28.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7355" for this suite.
STEP: Destroying namespace "webhook-7355-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:14.849 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":275,"completed":131,"skipped":2148,"failed":0}
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:43:29.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-map-03ccfc00-93e0-41d2-b321-c3b188caedc3
STEP: Creating a pod to test consume secrets
Aug 13 18:43:29.104: INFO: Waiting up to 5m0s for pod "pod-secrets-7e93ec91-d567-4f72-9e9e-e1ee1e347308" in namespace "secrets-6306" to be "Succeeded or Failed"
Aug 13 18:43:29.177: INFO: Pod "pod-secrets-7e93ec91-d567-4f72-9e9e-e1ee1e347308": Phase="Pending", Reason="", readiness=false. Elapsed: 72.762512ms
Aug 13 18:43:31.181: INFO: Pod "pod-secrets-7e93ec91-d567-4f72-9e9e-e1ee1e347308": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077257312s
Aug 13 18:43:33.184: INFO: Pod "pod-secrets-7e93ec91-d567-4f72-9e9e-e1ee1e347308": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080672842s
Aug 13 18:43:35.211: INFO: Pod "pod-secrets-7e93ec91-d567-4f72-9e9e-e1ee1e347308": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.10693788s
STEP: Saw pod success
Aug 13 18:43:35.211: INFO: Pod "pod-secrets-7e93ec91-d567-4f72-9e9e-e1ee1e347308" satisfied condition "Succeeded or Failed"
Aug 13 18:43:35.214: INFO: Trying to get logs from node kali-worker pod pod-secrets-7e93ec91-d567-4f72-9e9e-e1ee1e347308 container secret-volume-test: 
STEP: delete the pod
Aug 13 18:43:35.422: INFO: Waiting for pod pod-secrets-7e93ec91-d567-4f72-9e9e-e1ee1e347308 to disappear
Aug 13 18:43:35.464: INFO: Pod pod-secrets-7e93ec91-d567-4f72-9e9e-e1ee1e347308 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:43:35.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6306" for this suite.
• [SLOW TEST:6.468 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":132,"skipped":2148,"failed":0}
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:43:35.480: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:43:35.736: INFO: (0) /api/v1/nodes/kali-worker2:10250/proxy/logs/:
alternatives.log
containers/

>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6696
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-6696
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6696
Aug 13 18:43:36.013: INFO: Found 0 stateful pods, waiting for 1
Aug 13 18:43:46.049: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 13 18:43:46.052: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6696 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 13 18:43:46.396: INFO: stderr: "I0813 18:43:46.239348    1550 log.go:172] (0xc000a98000) (0xc000a3e0a0) Create stream\nI0813 18:43:46.239414    1550 log.go:172] (0xc000a98000) (0xc000a3e0a0) Stream added, broadcasting: 1\nI0813 18:43:46.242138    1550 log.go:172] (0xc000a98000) Reply frame received for 1\nI0813 18:43:46.242185    1550 log.go:172] (0xc000a98000) (0xc000b48320) Create stream\nI0813 18:43:46.242209    1550 log.go:172] (0xc000a98000) (0xc000b48320) Stream added, broadcasting: 3\nI0813 18:43:46.243308    1550 log.go:172] (0xc000a98000) Reply frame received for 3\nI0813 18:43:46.243358    1550 log.go:172] (0xc000a98000) (0xc000a3e140) Create stream\nI0813 18:43:46.243371    1550 log.go:172] (0xc000a98000) (0xc000a3e140) Stream added, broadcasting: 5\nI0813 18:43:46.244160    1550 log.go:172] (0xc000a98000) Reply frame received for 5\nI0813 18:43:46.312063    1550 log.go:172] (0xc000a98000) Data frame received for 5\nI0813 18:43:46.312087    1550 log.go:172] (0xc000a3e140) (5) Data frame handling\nI0813 18:43:46.312107    1550 log.go:172] (0xc000a3e140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:43:46.387628    1550 log.go:172] (0xc000a98000) Data frame received for 3\nI0813 18:43:46.387651    1550 log.go:172] (0xc000b48320) (3) Data frame handling\nI0813 18:43:46.387671    1550 log.go:172] (0xc000b48320) (3) Data frame sent\nI0813 18:43:46.387960    1550 log.go:172] (0xc000a98000) Data frame received for 3\nI0813 18:43:46.387979    1550 log.go:172] (0xc000b48320) (3) Data frame handling\nI0813 18:43:46.387999    1550 log.go:172] (0xc000a98000) Data frame received for 5\nI0813 18:43:46.388010    1550 log.go:172] (0xc000a3e140) (5) Data frame handling\nI0813 18:43:46.389869    1550 log.go:172] (0xc000a98000) Data frame received for 1\nI0813 18:43:46.389898    1550 log.go:172] (0xc000a3e0a0) (1) Data frame handling\nI0813 18:43:46.389921    1550 log.go:172] (0xc000a3e0a0) (1) Data frame sent\nI0813 18:43:46.389944  
  1550 log.go:172] (0xc000a98000) (0xc000a3e0a0) Stream removed, broadcasting: 1\nI0813 18:43:46.390074    1550 log.go:172] (0xc000a98000) Go away received\nI0813 18:43:46.390273    1550 log.go:172] (0xc000a98000) (0xc000a3e0a0) Stream removed, broadcasting: 1\nI0813 18:43:46.390290    1550 log.go:172] (0xc000a98000) (0xc000b48320) Stream removed, broadcasting: 3\nI0813 18:43:46.390298    1550 log.go:172] (0xc000a98000) (0xc000a3e140) Stream removed, broadcasting: 5\n"
Aug 13 18:43:46.396: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 13 18:43:46.396: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 13 18:43:46.400: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 13 18:43:56.405: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 13 18:43:56.405: INFO: Waiting for statefulset status.replicas updated to 0
Aug 13 18:43:56.484: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999357s
Aug 13 18:43:57.559: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.930381723s
Aug 13 18:43:58.563: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.855736326s
Aug 13 18:43:59.568: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.850981332s
Aug 13 18:44:00.573: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.846066421s
Aug 13 18:44:01.577: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.841544567s
Aug 13 18:44:02.582: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.836898111s
Aug 13 18:44:03.589: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.832512814s
Aug 13 18:44:04.600: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.82566567s
Aug 13 18:44:05.604: INFO: Verifying statefulset ss doesn't scale past 1 for another 814.282608ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6696
Aug 13 18:44:06.608: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6696 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 13 18:44:06.823: INFO: stderr: "I0813 18:44:06.732047    1569 log.go:172] (0xc000b14420) (0xc0006b37c0) Create stream\nI0813 18:44:06.732109    1569 log.go:172] (0xc000b14420) (0xc0006b37c0) Stream added, broadcasting: 1\nI0813 18:44:06.737576    1569 log.go:172] (0xc000b14420) Reply frame received for 1\nI0813 18:44:06.737614    1569 log.go:172] (0xc000b14420) (0xc0003d6aa0) Create stream\nI0813 18:44:06.737625    1569 log.go:172] (0xc000b14420) (0xc0003d6aa0) Stream added, broadcasting: 3\nI0813 18:44:06.738238    1569 log.go:172] (0xc000b14420) Reply frame received for 3\nI0813 18:44:06.738258    1569 log.go:172] (0xc000b14420) (0xc0006b3860) Create stream\nI0813 18:44:06.738264    1569 log.go:172] (0xc000b14420) (0xc0006b3860) Stream added, broadcasting: 5\nI0813 18:44:06.738757    1569 log.go:172] (0xc000b14420) Reply frame received for 5\nI0813 18:44:06.817175    1569 log.go:172] (0xc000b14420) Data frame received for 5\nI0813 18:44:06.817209    1569 log.go:172] (0xc000b14420) Data frame received for 3\nI0813 18:44:06.817239    1569 log.go:172] (0xc0003d6aa0) (3) Data frame handling\nI0813 18:44:06.817252    1569 log.go:172] (0xc0003d6aa0) (3) Data frame sent\nI0813 18:44:06.817269    1569 log.go:172] (0xc0006b3860) (5) Data frame handling\nI0813 18:44:06.817277    1569 log.go:172] (0xc0006b3860) (5) Data frame sent\nI0813 18:44:06.817288    1569 log.go:172] (0xc000b14420) Data frame received for 5\nI0813 18:44:06.817300    1569 log.go:172] (0xc0006b3860) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0813 18:44:06.817566    1569 log.go:172] (0xc000b14420) Data frame received for 3\nI0813 18:44:06.817577    1569 log.go:172] (0xc0003d6aa0) (3) Data frame handling\nI0813 18:44:06.818512    1569 log.go:172] (0xc000b14420) Data frame received for 1\nI0813 18:44:06.818526    1569 log.go:172] (0xc0006b37c0) (1) Data frame handling\nI0813 18:44:06.818536    1569 log.go:172] (0xc0006b37c0) (1) Data frame sent\nI0813 18:44:06.818723  
  1569 log.go:172] (0xc000b14420) (0xc0006b37c0) Stream removed, broadcasting: 1\nI0813 18:44:06.818893    1569 log.go:172] (0xc000b14420) Go away received\nI0813 18:44:06.819100    1569 log.go:172] (0xc000b14420) (0xc0006b37c0) Stream removed, broadcasting: 1\nI0813 18:44:06.819123    1569 log.go:172] (0xc000b14420) (0xc0003d6aa0) Stream removed, broadcasting: 3\nI0813 18:44:06.819141    1569 log.go:172] (0xc000b14420) (0xc0006b3860) Stream removed, broadcasting: 5\n"
Aug 13 18:44:06.823: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 13 18:44:06.823: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 13 18:44:06.826: INFO: Found 1 stateful pods, waiting for 3
Aug 13 18:44:16.830: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 18:44:16.830: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 18:44:16.830: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 13 18:44:16.835: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6696 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 13 18:44:17.126: INFO: stderr: "I0813 18:44:17.026604    1590 log.go:172] (0xc00003ac60) (0xc0006ab5e0) Create stream\nI0813 18:44:17.026676    1590 log.go:172] (0xc00003ac60) (0xc0006ab5e0) Stream added, broadcasting: 1\nI0813 18:44:17.028858    1590 log.go:172] (0xc00003ac60) Reply frame received for 1\nI0813 18:44:17.028910    1590 log.go:172] (0xc00003ac60) (0xc0009a8000) Create stream\nI0813 18:44:17.028922    1590 log.go:172] (0xc00003ac60) (0xc0009a8000) Stream added, broadcasting: 3\nI0813 18:44:17.029887    1590 log.go:172] (0xc00003ac60) Reply frame received for 3\nI0813 18:44:17.029936    1590 log.go:172] (0xc00003ac60) (0xc0009a80a0) Create stream\nI0813 18:44:17.029952    1590 log.go:172] (0xc00003ac60) (0xc0009a80a0) Stream added, broadcasting: 5\nI0813 18:44:17.030831    1590 log.go:172] (0xc00003ac60) Reply frame received for 5\nI0813 18:44:17.117784    1590 log.go:172] (0xc00003ac60) Data frame received for 3\nI0813 18:44:17.117809    1590 log.go:172] (0xc0009a8000) (3) Data frame handling\nI0813 18:44:17.117826    1590 log.go:172] (0xc0009a8000) (3) Data frame sent\nI0813 18:44:17.117866    1590 log.go:172] (0xc00003ac60) Data frame received for 5\nI0813 18:44:17.117881    1590 log.go:172] (0xc0009a80a0) (5) Data frame handling\nI0813 18:44:17.117910    1590 log.go:172] (0xc0009a80a0) (5) Data frame sent\nI0813 18:44:17.117929    1590 log.go:172] (0xc00003ac60) Data frame received for 5\nI0813 18:44:17.117948    1590 log.go:172] (0xc0009a80a0) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:44:17.117967    1590 log.go:172] (0xc00003ac60) Data frame received for 3\nI0813 18:44:17.118011    1590 log.go:172] (0xc0009a8000) (3) Data frame handling\nI0813 18:44:17.119129    1590 log.go:172] (0xc00003ac60) Data frame received for 1\nI0813 18:44:17.119148    1590 log.go:172] (0xc0006ab5e0) (1) Data frame handling\nI0813 18:44:17.119155    1590 log.go:172] (0xc0006ab5e0) (1) Data frame sent\nI0813 18:44:17.119162  
  1590 log.go:172] (0xc00003ac60) (0xc0006ab5e0) Stream removed, broadcasting: 1\nI0813 18:44:17.119393    1590 log.go:172] (0xc00003ac60) (0xc0006ab5e0) Stream removed, broadcasting: 1\nI0813 18:44:17.119406    1590 log.go:172] (0xc00003ac60) (0xc0009a8000) Stream removed, broadcasting: 3\nI0813 18:44:17.119413    1590 log.go:172] (0xc00003ac60) (0xc0009a80a0) Stream removed, broadcasting: 5\n"
Aug 13 18:44:17.126: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 13 18:44:17.126: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 13 18:44:17.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6696 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 13 18:44:17.392: INFO: stderr: "I0813 18:44:17.247107    1611 log.go:172] (0xc000790840) (0xc0007b2280) Create stream\nI0813 18:44:17.247157    1611 log.go:172] (0xc000790840) (0xc0007b2280) Stream added, broadcasting: 1\nI0813 18:44:17.251128    1611 log.go:172] (0xc000790840) Reply frame received for 1\nI0813 18:44:17.251213    1611 log.go:172] (0xc000790840) (0xc0005b5040) Create stream\nI0813 18:44:17.251232    1611 log.go:172] (0xc000790840) (0xc0005b5040) Stream added, broadcasting: 3\nI0813 18:44:17.254050    1611 log.go:172] (0xc000790840) Reply frame received for 3\nI0813 18:44:17.254091    1611 log.go:172] (0xc000790840) (0xc0005b5220) Create stream\nI0813 18:44:17.254109    1611 log.go:172] (0xc000790840) (0xc0005b5220) Stream added, broadcasting: 5\nI0813 18:44:17.254875    1611 log.go:172] (0xc000790840) Reply frame received for 5\nI0813 18:44:17.321334    1611 log.go:172] (0xc000790840) Data frame received for 5\nI0813 18:44:17.321356    1611 log.go:172] (0xc0005b5220) (5) Data frame handling\nI0813 18:44:17.321375    1611 log.go:172] (0xc0005b5220) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:44:17.383091    1611 log.go:172] (0xc000790840) Data frame received for 3\nI0813 18:44:17.383130    1611 log.go:172] (0xc0005b5040) (3) Data frame handling\nI0813 18:44:17.383156    1611 log.go:172] (0xc0005b5040) (3) Data frame sent\nI0813 18:44:17.383180    1611 log.go:172] (0xc000790840) Data frame received for 3\nI0813 18:44:17.383193    1611 log.go:172] (0xc0005b5040) (3) Data frame handling\nI0813 18:44:17.383285    1611 log.go:172] (0xc000790840) Data frame received for 5\nI0813 18:44:17.383303    1611 log.go:172] (0xc0005b5220) (5) Data frame handling\nI0813 18:44:17.385003    1611 log.go:172] (0xc000790840) Data frame received for 1\nI0813 18:44:17.385024    1611 log.go:172] (0xc0007b2280) (1) Data frame handling\nI0813 18:44:17.385056    1611 log.go:172] (0xc0007b2280) (1) Data frame sent\nI0813 18:44:17.385105  
  1611 log.go:172] (0xc000790840) (0xc0007b2280) Stream removed, broadcasting: 1\nI0813 18:44:17.385119    1611 log.go:172] (0xc000790840) Go away received\nI0813 18:44:17.385609    1611 log.go:172] (0xc000790840) (0xc0007b2280) Stream removed, broadcasting: 1\nI0813 18:44:17.385638    1611 log.go:172] (0xc000790840) (0xc0005b5040) Stream removed, broadcasting: 3\nI0813 18:44:17.385657    1611 log.go:172] (0xc000790840) (0xc0005b5220) Stream removed, broadcasting: 5\n"
Aug 13 18:44:17.392: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 13 18:44:17.392: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 13 18:44:17.392: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6696 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 13 18:44:17.671: INFO: stderr: "I0813 18:44:17.571048    1632 log.go:172] (0xc000b59810) (0xc000964960) Create stream\nI0813 18:44:17.571111    1632 log.go:172] (0xc000b59810) (0xc000964960) Stream added, broadcasting: 1\nI0813 18:44:17.573562    1632 log.go:172] (0xc000b59810) Reply frame received for 1\nI0813 18:44:17.573607    1632 log.go:172] (0xc000b59810) (0xc0009860a0) Create stream\nI0813 18:44:17.573620    1632 log.go:172] (0xc000b59810) (0xc0009860a0) Stream added, broadcasting: 3\nI0813 18:44:17.574444    1632 log.go:172] (0xc000b59810) Reply frame received for 3\nI0813 18:44:17.574489    1632 log.go:172] (0xc000b59810) (0xc000986140) Create stream\nI0813 18:44:17.574514    1632 log.go:172] (0xc000b59810) (0xc000986140) Stream added, broadcasting: 5\nI0813 18:44:17.575313    1632 log.go:172] (0xc000b59810) Reply frame received for 5\nI0813 18:44:17.630624    1632 log.go:172] (0xc000b59810) Data frame received for 5\nI0813 18:44:17.630655    1632 log.go:172] (0xc000986140) (5) Data frame handling\nI0813 18:44:17.630680    1632 log.go:172] (0xc000986140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:44:17.660988    1632 log.go:172] (0xc000b59810) Data frame received for 3\nI0813 18:44:17.661031    1632 log.go:172] (0xc0009860a0) (3) Data frame handling\nI0813 18:44:17.661066    1632 log.go:172] (0xc0009860a0) (3) Data frame sent\nI0813 18:44:17.661210    1632 log.go:172] (0xc000b59810) Data frame received for 5\nI0813 18:44:17.661242    1632 log.go:172] (0xc000986140) (5) Data frame handling\nI0813 18:44:17.661269    1632 log.go:172] (0xc000b59810) Data frame received for 3\nI0813 18:44:17.661284    1632 log.go:172] (0xc0009860a0) (3) Data frame handling\nI0813 18:44:17.663214    1632 log.go:172] (0xc000b59810) Data frame received for 1\nI0813 18:44:17.663263    1632 log.go:172] (0xc000964960) (1) Data frame handling\nI0813 18:44:17.663304    1632 log.go:172] (0xc000964960) (1) Data frame sent\nI0813 18:44:17.663331  
  1632 log.go:172] (0xc000b59810) (0xc000964960) Stream removed, broadcasting: 1\nI0813 18:44:17.663350    1632 log.go:172] (0xc000b59810) Go away received\nI0813 18:44:17.663827    1632 log.go:172] (0xc000b59810) (0xc000964960) Stream removed, broadcasting: 1\nI0813 18:44:17.663852    1632 log.go:172] (0xc000b59810) (0xc0009860a0) Stream removed, broadcasting: 3\nI0813 18:44:17.663864    1632 log.go:172] (0xc000b59810) (0xc000986140) Stream removed, broadcasting: 5\n"
Aug 13 18:44:17.671: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 13 18:44:17.671: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 13 18:44:17.671: INFO: Waiting for statefulset status.replicas updated to 0
Aug 13 18:44:17.693: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 13 18:44:27.703: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 13 18:44:27.703: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 13 18:44:27.703: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 13 18:44:27.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999753s
Aug 13 18:44:28.725: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.98856217s
Aug 13 18:44:29.729: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.983581027s
Aug 13 18:44:30.734: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.979156823s
Aug 13 18:44:31.740: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.973930069s
Aug 13 18:44:32.745: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.968635798s
Aug 13 18:44:33.751: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.963348829s
Aug 13 18:44:34.756: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.957661019s
Aug 13 18:44:35.761: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.9525951s
Aug 13 18:44:36.767: INFO: Verifying statefulset ss doesn't scale past 3 for another 946.77662ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-6696
Aug 13 18:44:37.787: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6696 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 13 18:44:38.164: INFO: stderr: "I0813 18:44:38.056186    1652 log.go:172] (0xc0009a93f0) (0xc00095c780) Create stream\nI0813 18:44:38.056234    1652 log.go:172] (0xc0009a93f0) (0xc00095c780) Stream added, broadcasting: 1\nI0813 18:44:38.061070    1652 log.go:172] (0xc0009a93f0) Reply frame received for 1\nI0813 18:44:38.061127    1652 log.go:172] (0xc0009a93f0) (0xc0005c75e0) Create stream\nI0813 18:44:38.061146    1652 log.go:172] (0xc0009a93f0) (0xc0005c75e0) Stream added, broadcasting: 3\nI0813 18:44:38.062247    1652 log.go:172] (0xc0009a93f0) Reply frame received for 3\nI0813 18:44:38.062310    1652 log.go:172] (0xc0009a93f0) (0xc000016a00) Create stream\nI0813 18:44:38.062339    1652 log.go:172] (0xc0009a93f0) (0xc000016a00) Stream added, broadcasting: 5\nI0813 18:44:38.063280    1652 log.go:172] (0xc0009a93f0) Reply frame received for 5\nI0813 18:44:38.153597    1652 log.go:172] (0xc0009a93f0) Data frame received for 3\nI0813 18:44:38.153644    1652 log.go:172] (0xc0005c75e0) (3) Data frame handling\nI0813 18:44:38.153684    1652 log.go:172] (0xc0005c75e0) (3) Data frame sent\nI0813 18:44:38.153710    1652 log.go:172] (0xc0009a93f0) Data frame received for 3\nI0813 18:44:38.153728    1652 log.go:172] (0xc0005c75e0) (3) Data frame handling\nI0813 18:44:38.154226    1652 log.go:172] (0xc0009a93f0) Data frame received for 5\nI0813 18:44:38.154257    1652 log.go:172] (0xc000016a00) (5) Data frame handling\nI0813 18:44:38.154277    1652 log.go:172] (0xc000016a00) (5) Data frame sent\nI0813 18:44:38.154289    1652 log.go:172] (0xc0009a93f0) Data frame received for 5\nI0813 18:44:38.154300    1652 log.go:172] (0xc000016a00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0813 18:44:38.155618    1652 log.go:172] (0xc0009a93f0) Data frame received for 1\nI0813 18:44:38.155643    1652 log.go:172] (0xc00095c780) (1) Data frame handling\nI0813 18:44:38.155657    1652 log.go:172] (0xc00095c780) (1) Data frame sent\nI0813 18:44:38.155674  
  1652 log.go:172] (0xc0009a93f0) (0xc00095c780) Stream removed, broadcasting: 1\nI0813 18:44:38.155777    1652 log.go:172] (0xc0009a93f0) Go away received\nI0813 18:44:38.156091    1652 log.go:172] (0xc0009a93f0) (0xc00095c780) Stream removed, broadcasting: 1\nI0813 18:44:38.156111    1652 log.go:172] (0xc0009a93f0) (0xc0005c75e0) Stream removed, broadcasting: 3\nI0813 18:44:38.156122    1652 log.go:172] (0xc0009a93f0) (0xc000016a00) Stream removed, broadcasting: 5\n"
Aug 13 18:44:38.164: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 13 18:44:38.164: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 13 18:44:38.164: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6696 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 13 18:44:38.377: INFO: stderr: "I0813 18:44:38.298745    1675 log.go:172] (0xc00003afd0) (0xc000223040) Create stream\nI0813 18:44:38.298799    1675 log.go:172] (0xc00003afd0) (0xc000223040) Stream added, broadcasting: 1\nI0813 18:44:38.301329    1675 log.go:172] (0xc00003afd0) Reply frame received for 1\nI0813 18:44:38.301395    1675 log.go:172] (0xc00003afd0) (0xc0006c1900) Create stream\nI0813 18:44:38.301414    1675 log.go:172] (0xc00003afd0) (0xc0006c1900) Stream added, broadcasting: 3\nI0813 18:44:38.302438    1675 log.go:172] (0xc00003afd0) Reply frame received for 3\nI0813 18:44:38.302471    1675 log.go:172] (0xc00003afd0) (0xc000936000) Create stream\nI0813 18:44:38.302492    1675 log.go:172] (0xc00003afd0) (0xc000936000) Stream added, broadcasting: 5\nI0813 18:44:38.303690    1675 log.go:172] (0xc00003afd0) Reply frame received for 5\nI0813 18:44:38.369994    1675 log.go:172] (0xc00003afd0) Data frame received for 3\nI0813 18:44:38.370052    1675 log.go:172] (0xc0006c1900) (3) Data frame handling\nI0813 18:44:38.370076    1675 log.go:172] (0xc0006c1900) (3) Data frame sent\nI0813 18:44:38.370115    1675 log.go:172] (0xc00003afd0) Data frame received for 5\nI0813 18:44:38.370132    1675 log.go:172] (0xc000936000) (5) Data frame handling\nI0813 18:44:38.370156    1675 log.go:172] (0xc000936000) (5) Data frame sent\nI0813 18:44:38.370174    1675 log.go:172] (0xc00003afd0) Data frame received for 5\nI0813 18:44:38.370190    1675 log.go:172] (0xc000936000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0813 18:44:38.370442    1675 log.go:172] (0xc00003afd0) Data frame received for 3\nI0813 18:44:38.370469    1675 log.go:172] (0xc0006c1900) (3) Data frame handling\nI0813 18:44:38.371579    1675 log.go:172] (0xc00003afd0) Data frame received for 1\nI0813 18:44:38.371596    1675 log.go:172] (0xc000223040) (1) Data frame handling\nI0813 18:44:38.371614    1675 log.go:172] (0xc000223040) (1) Data frame sent\nI0813 18:44:38.371625  
  1675 log.go:172] (0xc00003afd0) (0xc000223040) Stream removed, broadcasting: 1\nI0813 18:44:38.371804    1675 log.go:172] (0xc00003afd0) Go away received\nI0813 18:44:38.371992    1675 log.go:172] (0xc00003afd0) (0xc000223040) Stream removed, broadcasting: 1\nI0813 18:44:38.372011    1675 log.go:172] (0xc00003afd0) (0xc0006c1900) Stream removed, broadcasting: 3\nI0813 18:44:38.372020    1675 log.go:172] (0xc00003afd0) (0xc000936000) Stream removed, broadcasting: 5\n"
Aug 13 18:44:38.377: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 13 18:44:38.377: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 13 18:44:38.377: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-6696 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 13 18:44:38.620: INFO: stderr: "I0813 18:44:38.533256    1699 log.go:172] (0xc0007bc4d0) (0xc0006675e0) Create stream\nI0813 18:44:38.533323    1699 log.go:172] (0xc0007bc4d0) (0xc0006675e0) Stream added, broadcasting: 1\nI0813 18:44:38.536031    1699 log.go:172] (0xc0007bc4d0) Reply frame received for 1\nI0813 18:44:38.536096    1699 log.go:172] (0xc0007bc4d0) (0xc0007de0a0) Create stream\nI0813 18:44:38.536124    1699 log.go:172] (0xc0007bc4d0) (0xc0007de0a0) Stream added, broadcasting: 3\nI0813 18:44:38.537303    1699 log.go:172] (0xc0007bc4d0) Reply frame received for 3\nI0813 18:44:38.537356    1699 log.go:172] (0xc0007bc4d0) (0xc000667680) Create stream\nI0813 18:44:38.537374    1699 log.go:172] (0xc0007bc4d0) (0xc000667680) Stream added, broadcasting: 5\nI0813 18:44:38.538256    1699 log.go:172] (0xc0007bc4d0) Reply frame received for 5\nI0813 18:44:38.613080    1699 log.go:172] (0xc0007bc4d0) Data frame received for 3\nI0813 18:44:38.613124    1699 log.go:172] (0xc0007de0a0) (3) Data frame handling\nI0813 18:44:38.613143    1699 log.go:172] (0xc0007de0a0) (3) Data frame sent\nI0813 18:44:38.613171    1699 log.go:172] (0xc0007bc4d0) Data frame received for 3\nI0813 18:44:38.613185    1699 log.go:172] (0xc0007de0a0) (3) Data frame handling\nI0813 18:44:38.613222    1699 log.go:172] (0xc0007bc4d0) Data frame received for 5\nI0813 18:44:38.613254    1699 log.go:172] (0xc000667680) (5) Data frame handling\nI0813 18:44:38.613281    1699 log.go:172] (0xc000667680) (5) Data frame sent\nI0813 18:44:38.613299    1699 log.go:172] (0xc0007bc4d0) Data frame received for 5\nI0813 18:44:38.613313    1699 log.go:172] (0xc000667680) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0813 18:44:38.615561    1699 log.go:172] (0xc0007bc4d0) Data frame received for 1\nI0813 18:44:38.615577    1699 log.go:172] (0xc0006675e0) (1) Data frame handling\nI0813 18:44:38.615597    1699 log.go:172] (0xc0006675e0) (1) Data frame sent\nI0813 18:44:38.615624  
  1699 log.go:172] (0xc0007bc4d0) (0xc0006675e0) Stream removed, broadcasting: 1\nI0813 18:44:38.615840    1699 log.go:172] (0xc0007bc4d0) Go away received\nI0813 18:44:38.615944    1699 log.go:172] (0xc0007bc4d0) (0xc0006675e0) Stream removed, broadcasting: 1\nI0813 18:44:38.615960    1699 log.go:172] (0xc0007bc4d0) (0xc0007de0a0) Stream removed, broadcasting: 3\nI0813 18:44:38.615970    1699 log.go:172] (0xc0007bc4d0) (0xc000667680) Stream removed, broadcasting: 5\n"
Aug 13 18:44:38.620: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 13 18:44:38.620: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 13 18:44:38.620: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 13 18:45:08.638: INFO: Deleting all statefulset in ns statefulset-6696
Aug 13 18:45:08.640: INFO: Scaling statefulset ss to 0
Aug 13 18:45:08.649: INFO: Waiting for statefulset status.replicas updated to 0
Aug 13 18:45:08.651: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:45:08.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6696" for this suite.

• [SLOW TEST:92.823 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":275,"completed":134,"skipped":2205,"failed":0}
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:45:08.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 18:45:08.759: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d60d7b92-1963-47f9-9874-f561d9378375" in namespace "projected-3359" to be "Succeeded or Failed"
Aug 13 18:45:08.795: INFO: Pod "downwardapi-volume-d60d7b92-1963-47f9-9874-f561d9378375": Phase="Pending", Reason="", readiness=false. Elapsed: 36.050142ms
Aug 13 18:45:10.816: INFO: Pod "downwardapi-volume-d60d7b92-1963-47f9-9874-f561d9378375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.057621378s
Aug 13 18:45:12.820: INFO: Pod "downwardapi-volume-d60d7b92-1963-47f9-9874-f561d9378375": Phase="Running", Reason="", readiness=true. Elapsed: 4.061626115s
Aug 13 18:45:14.824: INFO: Pod "downwardapi-volume-d60d7b92-1963-47f9-9874-f561d9378375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.065314057s
STEP: Saw pod success
Aug 13 18:45:14.824: INFO: Pod "downwardapi-volume-d60d7b92-1963-47f9-9874-f561d9378375" satisfied condition "Succeeded or Failed"
Aug 13 18:45:14.827: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-d60d7b92-1963-47f9-9874-f561d9378375 container client-container: 
STEP: delete the pod
Aug 13 18:45:14.887: INFO: Waiting for pod downwardapi-volume-d60d7b92-1963-47f9-9874-f561d9378375 to disappear
Aug 13 18:45:14.954: INFO: Pod downwardapi-volume-d60d7b92-1963-47f9-9874-f561d9378375 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:45:14.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3359" for this suite.

• [SLOW TEST:6.332 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":275,"completed":135,"skipped":2205,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:45:15.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 13 18:45:15.145: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 13 18:45:15.173: INFO: Waiting for terminating namespaces to be deleted...
Aug 13 18:45:15.176: INFO: 
Logging pods the kubelet thinks are on node kali-worker before test
Aug 13 18:45:15.182: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.182: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 13 18:45:15.182: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.182: INFO: 	Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 13 18:45:15.182: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.182: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 13 18:45:15.182: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.183: INFO: 	Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 13 18:45:15.183: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.183: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 13 18:45:15.183: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.183: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 13 18:45:15.183: INFO: 
Logging pods the kubelet thinks are on node kali-worker2 before test
Aug 13 18:45:15.202: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 18:16:03 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.202: INFO: 	Container rally-6c5ea4be-96nyoha6 ready: true, restart count 52
Aug 13 18:45:15.202: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.202: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 13 18:45:15.202: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.202: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 13 18:45:15.202: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.202: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 13 18:45:15.202: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.202: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 13 18:45:15.202: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11 18:51:39 +0000 UTC (1 container statuses recorded)
Aug 13 18:45:15.202: INFO: 	Container rally-7104017d-j5l4uv4e ready: true, restart count 1
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-02226b45-44d3-49c6-aeb9-66ea28fe62b4 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-02226b45-44d3-49c6-aeb9-66ea28fe62b4 off the node kali-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-02226b45-44d3-49c6-aeb9-66ea28fe62b4
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:50:23.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6919" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:308.657 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":275,"completed":136,"skipped":2231,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:50:23.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:50:23.810: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:50:24.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8827" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":275,"completed":137,"skipped":2234,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:50:24.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:50:24.946: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 13 18:50:29.998: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 13 18:50:29.998: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 13 18:50:32.202: INFO: Creating deployment "test-rollover-deployment"
Aug 13 18:50:32.428: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 13 18:50:34.437: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 13 18:50:34.444: INFO: Ensure that both replica sets have 1 created replica
Aug 13 18:50:34.448: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 13 18:50:34.453: INFO: Updating deployment test-rollover-deployment
Aug 13 18:50:34.453: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 13 18:50:36.595: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 13 18:50:36.601: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 13 18:50:36.678: INFO: all replica sets need to contain the pod-template-hash label
Aug 13 18:50:36.678: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941435, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:50:38.710: INFO: all replica sets need to contain the pod-template-hash label
Aug 13 18:50:38.710: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941438, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:50:40.687: INFO: all replica sets need to contain the pod-template-hash label
Aug 13 18:50:40.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941438, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:50:42.685: INFO: all replica sets need to contain the pod-template-hash label
Aug 13 18:50:42.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941438, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:50:44.687: INFO: all replica sets need to contain the pod-template-hash label
Aug 13 18:50:44.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941438, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:50:46.687: INFO: all replica sets need to contain the pod-template-hash label
Aug 13 18:50:46.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941438, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:50:48.744: INFO: 
Aug 13 18:50:48.744: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941448, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732941432, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-84f7f6f64b\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 18:50:50.685: INFO: 
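The polling above keeps looping while the old pod is still counted: at 18:50:48 the status shows Replicas:2 against a desired count of 1, so the wait continues even though UnavailableReplicas has reached 0. A minimal sketch of the completeness condition being waited on (field names follow the DeploymentStatus dumps above; the helper itself is hypothetical and omits the ObservedGeneration comparison the framework also makes):

```python
def deployment_complete(desired: int, status: dict) -> bool:
    """Rollout is done only when total, updated, and available replica
    counts all equal the desired count (no old-ReplicaSet pods remain)."""
    return (
        status["Replicas"] == desired
        and status["UpdatedReplicas"] == desired
        and status["AvailableReplicas"] == desired
    )


# Status as dumped at 18:50:48 -- the old pod still exists, so the poll waits:
mid_rollover = {"Replicas": 2, "UpdatedReplicas": 1, "AvailableReplicas": 2}

# Final status (Replicas:1, UpdatedReplicas:1, AvailableReplicas:1) -- done:
rolled_over = {"Replicas": 1, "UpdatedReplicas": 1, "AvailableReplicas": 1}
```

This matches the log: the loop exits between 18:50:48 and 18:50:50, once the old ReplicaSet has been scaled to zero.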
Aug 13 18:50:50.686: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 13 18:50:50.692: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-3276 /apis/apps/v1/namespaces/deployment-3276/deployments/test-rollover-deployment 4595e2af-19fd-4ae3-bf85-3a600f908dc7 9282995 2 2020-08-13 18:50:32 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-13 18:50:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 
125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-13 18:50:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 
58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032a67c8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-13 18:50:32 +0000 UTC,LastTransitionTime:2020-08-13 18:50:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-84f7f6f64b" has successfully progressed.,LastUpdateTime:2020-08-13 18:50:48 +0000 UTC,LastTransitionTime:2020-08-13 18:50:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 13 18:50:50.695: INFO: New ReplicaSet "test-rollover-deployment-84f7f6f64b" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-84f7f6f64b  deployment-3276 /apis/apps/v1/namespaces/deployment-3276/replicasets/test-rollover-deployment-84f7f6f64b 84d5309f-cbae-48a9-a605-6b270e1810d3 9282984 2 2020-08-13 18:50:34 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 4595e2af-19fd-4ae3-bf85-3a600f908dc7 0xc0033fa9d7 0xc0033fa9d8}] []  [{kube-controller-manager Update apps/v1 2020-08-13 18:50:48 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 53 57 53 101 50 97 102 45 49 57 102 100 45 52 97 101 51 45 98 102 56 53 45 51 97 54 48 48 102 57 48 56 100 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 
34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 
105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 84f7f6f64b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033faa68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
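The long numeric runs in the managedFields dumps above (`Raw:*[123 34 102 58 ...]`) are not corruption: Go's struct printer renders the FieldsV1 `[]byte` payload as decimal byte values, and the bytes are plain ASCII JSON. A small sketch of recovering readable JSON from such a run:

```python
import json

# The first bytes of one of the Raw:*[...] runs above, extended with a
# closing pair for illustration: 123 34 102 58 109 101 116 97 100 97 116 97
# decode to the ASCII characters '{"f:metadata' ...
raw = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123, 125, 125]

text = bytes(raw).decode("ascii")   # '{"f:metadata":{}}'
fields = json.loads(text)           # managed-fields JSON, "f:" prefixes mark field names
```

Decoding a full run the same way yields the complete server-side-apply field ownership document that kube-controller-manager and e2e.test recorded on the object.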
Aug 13 18:50:50.695: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 13 18:50:50.695: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-3276 /apis/apps/v1/namespaces/deployment-3276/replicasets/test-rollover-controller 2d421126-7c57-4b09-a7fb-a6b2e885913f 9282994 2 2020-08-13 18:50:24 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 4595e2af-19fd-4ae3-bf85-3a600f908dc7 0xc0033fa7bf 0xc0033fa7d0}] []  [{e2e.test Update apps/v1 2020-08-13 18:50:24 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 
121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-13 18:50:48 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 53 57 53 101 50 97 102 45 49 57 102 100 45 52 97 101 51 45 98 102 56 53 45 51 97 54 48 48 102 57 48 56 100 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 
125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0033fa868  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 13 18:50:50.695: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-5686c4cfd5  deployment-3276 /apis/apps/v1/namespaces/deployment-3276/replicasets/test-rollover-deployment-5686c4cfd5 9bdf3cc1-6e24-43e3-b4cd-95909845b79d 9282934 2 2020-08-13 18:50:32 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 4595e2af-19fd-4ae3-bf85-3a600f908dc7 0xc0033fa8d7 0xc0033fa8d8}] []  [{kube-controller-manager Update apps/v1 2020-08-13 18:50:35 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 52 53 57 53 101 50 97 102 45 49 57 102 100 45 52 97 101 51 45 98 102 56 53 45 51 97 54 48 48 102 57 48 56 100 99 55 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 
116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 109 105 110 82 101 97 100 121 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 114 101 100 105 115 45 115 108 97 118 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 
34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 5686c4cfd5,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:5686c4cfd5] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0033fa968  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 13 18:50:50.698: INFO: Pod "test-rollover-deployment-84f7f6f64b-htb8t" is available:
&Pod{ObjectMeta:{test-rollover-deployment-84f7f6f64b-htb8t test-rollover-deployment-84f7f6f64b- deployment-3276 /api/v1/namespaces/deployment-3276/pods/test-rollover-deployment-84f7f6f64b-htb8t 1f548ed2-b4c9-4672-9ad8-987994406306 9282952 0 2020-08-13 18:50:34 +0000 UTC   map[name:rollover-pod pod-template-hash:84f7f6f64b] map[] [{apps/v1 ReplicaSet test-rollover-deployment-84f7f6f64b 84d5309f-cbae-48a9-a605-6b270e1810d3 0xc0032d8627 0xc0032d8628}] []  [{kube-controller-manager Update v1 2020-08-13 18:50:34 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 56 52 100 53 51 48 57 102 45 99 98 97 101 45 52 56 97 57 45 97 54 48 53 45 54 98 50 55 48 101 49 56 49 48 100 51 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 
101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 18:50:38 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 
84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 49 52 50 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-gkdmz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-gkdmz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-gkdmz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,
RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:50:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:50:38 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:50:38 +0000 
UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 18:50:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.142,StartTime:2020-08-13 18:50:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 18:50:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://d8ba98b0c4ef9645986d6fd024f563690e35aa203e8d8ae3ed6448e4a2aba500,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.142,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:50:50.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3276" for this suite.

• [SLOW TEST:25.854 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":275,"completed":138,"skipped":2265,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:50:50.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 13 18:50:50.787: INFO: Waiting up to 5m0s for pod "pod-86382467-787e-4999-bf9e-47637b77522d" in namespace "emptydir-1235" to be "Succeeded or Failed"
Aug 13 18:50:50.807: INFO: Pod "pod-86382467-787e-4999-bf9e-47637b77522d": Phase="Pending", Reason="", readiness=false. Elapsed: 20.23164ms
Aug 13 18:50:52.812: INFO: Pod "pod-86382467-787e-4999-bf9e-47637b77522d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024875085s
Aug 13 18:50:54.816: INFO: Pod "pod-86382467-787e-4999-bf9e-47637b77522d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028705731s
STEP: Saw pod success
Aug 13 18:50:54.816: INFO: Pod "pod-86382467-787e-4999-bf9e-47637b77522d" satisfied condition "Succeeded or Failed"
Aug 13 18:50:54.818: INFO: Trying to get logs from node kali-worker pod pod-86382467-787e-4999-bf9e-47637b77522d container test-container: 
STEP: delete the pod
Aug 13 18:50:55.173: INFO: Waiting for pod pod-86382467-787e-4999-bf9e-47637b77522d to disappear
Aug 13 18:50:55.180: INFO: Pod pod-86382467-787e-4999-bf9e-47637b77522d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:50:55.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1235" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":139,"skipped":2322,"failed":0}
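The pattern repeated throughout this run — `Waiting up to 5m0s for pod "..." to be "Succeeded or Failed"`, logging the phase and elapsed time on each poll — can be sketched as a generic poll-with-timeout helper. This is a minimal illustration of the wait loop visible in the log, not the e2e framework's actual Go implementation; all names below are hypothetical.

```python
import time

def wait_for_terminal_phase(check, timeout_s=300.0, interval_s=2.0,
                            now=time.monotonic, sleep=time.sleep):
    """Poll `check` until it returns a terminal pod phase, logging the
    phase and elapsed time on each iteration like the log lines above,
    or raise once the timeout is exceeded."""
    start = now()
    while True:
        phase = check()
        elapsed = now() - start
        print(f'Pod: Phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout_s:
            raise TimeoutError(f"pod did not reach a terminal phase in {timeout_s}s")
        sleep(interval_s)

# Simulated pod lifecycle: Pending -> Pending -> Running -> Succeeded
phases = iter(["Pending", "Pending", "Running", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases),
                                 timeout_s=10.0, interval_s=0.0)
```

Injecting `now` and `sleep` keeps the helper testable without real delays.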
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:50:55.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-9702
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 13 18:50:55.262: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 13 18:50:55.343: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 18:50:57.442: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 18:50:59.348: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 18:51:01.347: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:51:03.348: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:51:05.346: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:51:07.347: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 18:51:09.373: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 13 18:51:09.378: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 13 18:51:11.382: INFO: The status of Pod netserver-1 is Running (Ready = false)
Aug 13 18:51:13.676: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 13 18:51:20.230: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.144:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9702 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 18:51:20.230: INFO: >>> kubeConfig: /root/.kube/config
I0813 18:51:20.264294       7 log.go:172] (0xc00282c0b0) (0xc001fab2c0) Create stream
I0813 18:51:20.264320       7 log.go:172] (0xc00282c0b0) (0xc001fab2c0) Stream added, broadcasting: 1
I0813 18:51:20.266168       7 log.go:172] (0xc00282c0b0) Reply frame received for 1
I0813 18:51:20.266206       7 log.go:172] (0xc00282c0b0) (0xc0024700a0) Create stream
I0813 18:51:20.266222       7 log.go:172] (0xc00282c0b0) (0xc0024700a0) Stream added, broadcasting: 3
I0813 18:51:20.266814       7 log.go:172] (0xc00282c0b0) Reply frame received for 3
I0813 18:51:20.266842       7 log.go:172] (0xc00282c0b0) (0xc002470140) Create stream
I0813 18:51:20.266855       7 log.go:172] (0xc00282c0b0) (0xc002470140) Stream added, broadcasting: 5
I0813 18:51:20.267492       7 log.go:172] (0xc00282c0b0) Reply frame received for 5
I0813 18:51:20.333183       7 log.go:172] (0xc00282c0b0) Data frame received for 5
I0813 18:51:20.333221       7 log.go:172] (0xc002470140) (5) Data frame handling
I0813 18:51:20.333254       7 log.go:172] (0xc00282c0b0) Data frame received for 3
I0813 18:51:20.333277       7 log.go:172] (0xc0024700a0) (3) Data frame handling
I0813 18:51:20.333304       7 log.go:172] (0xc0024700a0) (3) Data frame sent
I0813 18:51:20.333614       7 log.go:172] (0xc00282c0b0) Data frame received for 3
I0813 18:51:20.333655       7 log.go:172] (0xc0024700a0) (3) Data frame handling
I0813 18:51:20.335072       7 log.go:172] (0xc00282c0b0) Data frame received for 1
I0813 18:51:20.335096       7 log.go:172] (0xc001fab2c0) (1) Data frame handling
I0813 18:51:20.335120       7 log.go:172] (0xc001fab2c0) (1) Data frame sent
I0813 18:51:20.335136       7 log.go:172] (0xc00282c0b0) (0xc001fab2c0) Stream removed, broadcasting: 1
I0813 18:51:20.335231       7 log.go:172] (0xc00282c0b0) (0xc0024700a0) Stream removed, broadcasting: 3
I0813 18:51:20.335304       7 log.go:172] (0xc00282c0b0) Go away received
I0813 18:51:20.335352       7 log.go:172] (0xc00282c0b0) (0xc002470140) Stream removed, broadcasting: 5
Aug 13 18:51:20.335: INFO: Found all expected endpoints: [netserver-0]
Aug 13 18:51:20.338: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.83:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9702 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 18:51:20.338: INFO: >>> kubeConfig: /root/.kube/config
I0813 18:51:20.369809       7 log.go:172] (0xc00282c840) (0xc001fab860) Create stream
I0813 18:51:20.369837       7 log.go:172] (0xc00282c840) (0xc001fab860) Stream added, broadcasting: 1
I0813 18:51:20.371834       7 log.go:172] (0xc00282c840) Reply frame received for 1
I0813 18:51:20.371881       7 log.go:172] (0xc00282c840) (0xc0010e0140) Create stream
I0813 18:51:20.371896       7 log.go:172] (0xc00282c840) (0xc0010e0140) Stream added, broadcasting: 3
I0813 18:51:20.373025       7 log.go:172] (0xc00282c840) Reply frame received for 3
I0813 18:51:20.373084       7 log.go:172] (0xc00282c840) (0xc001fab900) Create stream
I0813 18:51:20.373098       7 log.go:172] (0xc00282c840) (0xc001fab900) Stream added, broadcasting: 5
I0813 18:51:20.373974       7 log.go:172] (0xc00282c840) Reply frame received for 5
I0813 18:51:20.439493       7 log.go:172] (0xc00282c840) Data frame received for 3
I0813 18:51:20.439548       7 log.go:172] (0xc0010e0140) (3) Data frame handling
I0813 18:51:20.439584       7 log.go:172] (0xc0010e0140) (3) Data frame sent
I0813 18:51:20.439604       7 log.go:172] (0xc00282c840) Data frame received for 3
I0813 18:51:20.439624       7 log.go:172] (0xc0010e0140) (3) Data frame handling
I0813 18:51:20.439735       7 log.go:172] (0xc00282c840) Data frame received for 5
I0813 18:51:20.439769       7 log.go:172] (0xc001fab900) (5) Data frame handling
I0813 18:51:20.450252       7 log.go:172] (0xc00282c840) Data frame received for 1
I0813 18:51:20.450281       7 log.go:172] (0xc001fab860) (1) Data frame handling
I0813 18:51:20.450302       7 log.go:172] (0xc001fab860) (1) Data frame sent
I0813 18:51:20.450322       7 log.go:172] (0xc00282c840) (0xc001fab860) Stream removed, broadcasting: 1
I0813 18:51:20.450342       7 log.go:172] (0xc00282c840) Go away received
I0813 18:51:20.450503       7 log.go:172] (0xc00282c840) (0xc001fab860) Stream removed, broadcasting: 1
I0813 18:51:20.450528       7 log.go:172] (0xc00282c840) (0xc0010e0140) Stream removed, broadcasting: 3
I0813 18:51:20.450544       7 log.go:172] (0xc00282c840) (0xc001fab900) Stream removed, broadcasting: 5
Aug 13 18:51:20.450: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:51:20.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9702" for this suite.

• [SLOW TEST:25.267 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":140,"skipped":2336,"failed":0}
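The networking check above execs `curl http://<podIP>:8080/hostName` from a host-test pod against each netserver pod and succeeds once every expected endpoint has answered ("Found all expected endpoints"). The comparison step can be sketched as below; the `fetch_hostname` callable stands in for the curl exec, and all names here are hypothetical rather than the framework's real API.

```python
def check_endpoints(expected, fetch_hostname, targets):
    """Query each target's /hostName endpoint via the injected
    `fetch_hostname` callable and split the expected endpoint names
    into those that answered and those still missing."""
    found = {fetch_hostname(t) for t in targets}
    missing = set(expected) - found
    return sorted(found & set(expected)), sorted(missing)

# Simulated responses from the two netserver pods seen in the log:
responses = {"10.244.2.144": "netserver-0", "10.244.1.83": "netserver-1"}
found, missing = check_endpoints(
    expected=["netserver-0", "netserver-1"],
    fetch_hostname=responses.__getitem__,
    targets=["10.244.2.144", "10.244.1.83"],
)
```

In the real test the loop retries until `missing` is empty or the timeout expires.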
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:51:20.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-bbb04a12-50f3-4c17-a816-a32ab0b7a446
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-bbb04a12-50f3-4c17-a816-a32ab0b7a446
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:51:26.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7157" for this suite.

• [SLOW TEST:6.305 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":141,"skipped":2371,"failed":0}
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:51:26.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 13 18:51:27.381: INFO: Waiting up to 5m0s for pod "pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086" in namespace "emptydir-8387" to be "Succeeded or Failed"
Aug 13 18:51:27.471: INFO: Pod "pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086": Phase="Pending", Reason="", readiness=false. Elapsed: 89.98407ms
Aug 13 18:51:29.507: INFO: Pod "pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125539892s
Aug 13 18:51:31.512: INFO: Pod "pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130638822s
Aug 13 18:51:33.693: INFO: Pod "pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086": Phase="Running", Reason="", readiness=true. Elapsed: 6.311596164s
Aug 13 18:51:35.697: INFO: Pod "pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.31538096s
STEP: Saw pod success
Aug 13 18:51:35.697: INFO: Pod "pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086" satisfied condition "Succeeded or Failed"
Aug 13 18:51:35.699: INFO: Trying to get logs from node kali-worker pod pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086 container test-container: 
STEP: delete the pod
Aug 13 18:51:36.066: INFO: Waiting for pod pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086 to disappear
Aug 13 18:51:36.092: INFO: Pod pod-fe8fdfb3-90ef-4e41-963b-541f9f80b086 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:51:36.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8387" for this suite.

• [SLOW TEST:9.339 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":142,"skipped":2378,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:51:36.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 18:51:37.040: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0398e358-1f9b-4166-bd29-e598c9f3ba81" in namespace "projected-6719" to be "Succeeded or Failed"
Aug 13 18:51:37.044: INFO: Pod "downwardapi-volume-0398e358-1f9b-4166-bd29-e598c9f3ba81": Phase="Pending", Reason="", readiness=false. Elapsed: 3.283644ms
Aug 13 18:51:39.049: INFO: Pod "downwardapi-volume-0398e358-1f9b-4166-bd29-e598c9f3ba81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00817733s
Aug 13 18:51:41.052: INFO: Pod "downwardapi-volume-0398e358-1f9b-4166-bd29-e598c9f3ba81": Phase="Running", Reason="", readiness=true. Elapsed: 4.011838048s
Aug 13 18:51:43.057: INFO: Pod "downwardapi-volume-0398e358-1f9b-4166-bd29-e598c9f3ba81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.015999984s
STEP: Saw pod success
Aug 13 18:51:43.057: INFO: Pod "downwardapi-volume-0398e358-1f9b-4166-bd29-e598c9f3ba81" satisfied condition "Succeeded or Failed"
Aug 13 18:51:43.059: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-0398e358-1f9b-4166-bd29-e598c9f3ba81 container client-container: 
STEP: delete the pod
Aug 13 18:51:43.081: INFO: Waiting for pod downwardapi-volume-0398e358-1f9b-4166-bd29-e598c9f3ba81 to disappear
Aug 13 18:51:43.086: INFO: Pod downwardapi-volume-0398e358-1f9b-4166-bd29-e598c9f3ba81 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:51:43.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6719" for this suite.

• [SLOW TEST:6.991 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":275,"completed":143,"skipped":2403,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:51:43.093: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-6134.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-6134.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6134.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-6134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-6134.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-6134.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-6134.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6134.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
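Each dig loop above writes a marker file under `/results` named `<image>_<proto>@<name>` for every lookup that returns an answer; the prober then reads those files back to decide which records resolved. The naming scheme can be sketched as follows — a reconstruction of the pattern visible in the commands, not the framework's code:

```python
def probe_result_files(images, names, pod_a_record=True):
    """Enumerate the /results marker file names the DNS probe pod
    writes: one <image>_<proto>@<name> file per successful dig lookup,
    over UDP and TCP, plus an optional PodARecord check per protocol."""
    files = []
    for image in images:
        for proto in ("udp", "tcp"):
            for name in names:
                files.append(f"{image}_{proto}@{name}")
            if pod_a_record:
                files.append(f"{image}_{proto}@PodARecord")
    return files

files = probe_result_files(
    images=["wheezy", "jessie"],
    names=[
        "dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local",
        "dns-test-service-2.dns-6134.svc.cluster.local",
    ],
)
```

With two images, two protocols, two names, and the PodARecord check this yields twelve expected marker files, matching the lookups probed in the log lines that follow.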

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 13 18:51:51.310: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local from pod dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424: the server could not find the requested resource (get pods dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424)
Aug 13 18:51:51.315: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local from pod dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424: the server could not find the requested resource (get pods dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424)
Aug 13 18:51:51.319: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-6134.svc.cluster.local from pod dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424: the server could not find the requested resource (get pods dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424)
Aug 13 18:51:51.322: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-6134.svc.cluster.local from pod dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424: the server could not find the requested resource (get pods dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424)
Aug 13 18:51:51.330: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local from pod dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424: the server could not find the requested resource (get pods dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424)
Aug 13 18:51:51.333: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local from pod dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424: the server could not find the requested resource (get pods dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424)
Aug 13 18:51:51.336: INFO: Unable to read jessie_udp@dns-test-service-2.dns-6134.svc.cluster.local from pod dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424: the server could not find the requested resource (get pods dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424)
Aug 13 18:51:51.338: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-6134.svc.cluster.local from pod dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424: the server could not find the requested resource (get pods dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424)
Aug 13 18:51:51.343: INFO: Lookups using dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local wheezy_udp@dns-test-service-2.dns-6134.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-6134.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-6134.svc.cluster.local jessie_udp@dns-test-service-2.dns-6134.svc.cluster.local jessie_tcp@dns-test-service-2.dns-6134.svc.cluster.local]

Aug 13 18:51:56 - 18:52:16: INFO: The same eight lookups (wheezy and jessie, UDP and TCP) failed identically in five further probe rounds, at 18:51:56, 18:52:01, 18:52:06, 18:52:11, and 18:52:16.

Aug 13 18:52:21.961: INFO: DNS probes using dns-6134/dns-test-f74f4d90-3d15-4eb9-901f-04a51f755424 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:52:23.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6134" for this suite.

• [SLOW TEST:40.098 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":275,"completed":144,"skipped":2420,"failed":0}
SSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:52:23.191: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 13 18:52:23.738: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 13 18:52:35.033: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
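The deletion check above treats a NotFound response on the pod GET as confirmation that the kubelet observed and completed the graceful termination. A stubbed sketch of that poll loop, with a hypothetical `get_pod` in place of the real API call:

```shell
#!/bin/sh
# Poll for the pod; once the GET reports NotFound, conclude that the
# termination request was observed and completed. `get_pod` is a stub
# for `kubectl get pod <name>`: it "finds" the pod twice, then NotFound.
n=0
get_pod() {
  n=$((n + 1))
  [ "$n" -le 2 ]   # exit 0 = pod still exists, non-zero = NotFound
}

while get_pod; do
  : # the real test sleeps briefly here before re-polling
done
echo "no pod exists with that name; termination observed and completed"
```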
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:52:35.036: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9979" for this suite.

• [SLOW TEST:11.851 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":275,"completed":145,"skipped":2424,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:52:35.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-6e068681-9adc-4317-a4df-07a8f9e86bc5
STEP: Creating a pod to test consume configMaps
Aug 13 18:52:35.171: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8441f4d0-5015-41bd-a119-92276df33667" in namespace "projected-4589" to be "Succeeded or Failed"
Aug 13 18:52:35.177: INFO: Pod "pod-projected-configmaps-8441f4d0-5015-41bd-a119-92276df33667": Phase="Pending", Reason="", readiness=false. Elapsed: 5.537894ms
Aug 13 18:52:37.181: INFO: Pod "pod-projected-configmaps-8441f4d0-5015-41bd-a119-92276df33667": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009247779s
Aug 13 18:52:39.185: INFO: Pod "pod-projected-configmaps-8441f4d0-5015-41bd-a119-92276df33667": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01375286s
Aug 13 18:52:41.281: INFO: Pod "pod-projected-configmaps-8441f4d0-5015-41bd-a119-92276df33667": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.109356375s
STEP: Saw pod success
Aug 13 18:52:41.281: INFO: Pod "pod-projected-configmaps-8441f4d0-5015-41bd-a119-92276df33667" satisfied condition "Succeeded or Failed"
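The 'Waiting up to 5m0s for pod ... to be "Succeeded or Failed"' sequence above comes from a poll-until-terminal-phase loop. A sketch of that loop, with a stub phase sequence (Pending, Pending, Pending, Succeeded, matching the log) in place of the API server:

```shell
#!/bin/sh
# Poll the pod phase until it is terminal (Succeeded or Failed) or the
# deadline passes. `pod_phase` stubs the API call and replays the phases
# from the log above: Pending -> Pending -> Pending -> Succeeded.
i=0
pod_phase() {
  i=$((i + 1))
  case $i in
    1|2|3) phase=Pending ;;
    *)     phase=Succeeded ;;
  esac
}

deadline=$(( $(date +%s) + 300 ))   # 5m0s budget, as in the log
phase=Unknown
while [ "$(date +%s)" -lt "$deadline" ]; do
  pod_phase
  case $phase in
    Succeeded|Failed) break ;;
  esac
  sleep 0   # the real framework waits roughly 2s between polls
done
echo "pod satisfied condition: $phase"
```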
Aug 13 18:52:41.284: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-8441f4d0-5015-41bd-a119-92276df33667 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 13 18:52:42.124: INFO: Waiting for pod pod-projected-configmaps-8441f4d0-5015-41bd-a119-92276df33667 to disappear
Aug 13 18:52:42.226: INFO: Pod pod-projected-configmaps-8441f4d0-5015-41bd-a119-92276df33667 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:52:42.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4589" for this suite.

• [SLOW TEST:7.205 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":275,"completed":146,"skipped":2435,"failed":0}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:52:42.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Aug 13 18:52:43.510: INFO: >>> kubeConfig: /root/.kube/config
Aug 13 18:52:46.831: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:52:58.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5943" for this suite.

• [SLOW TEST:15.797 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":275,"completed":147,"skipped":2435,"failed":0}
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:52:58.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-8611
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-8611
STEP: Creating statefulset with conflicting port in namespace statefulset-8611
STEP: Waiting until pod test-pod starts running in namespace statefulset-8611
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-8611
Aug 13 18:53:04.373: INFO: Observed stateful pod in namespace: statefulset-8611, name: ss-0, uid: 6178ed5f-17b2-427f-b794-3a18f4c955a7, status phase: Pending. Waiting for statefulset controller to delete.
Aug 13 18:53:04.376: INFO: Observed stateful pod in namespace: statefulset-8611, name: ss-0, uid: 6178ed5f-17b2-427f-b794-3a18f4c955a7, status phase: Failed. Waiting for statefulset controller to delete.
Aug 13 18:53:04.453: INFO: Observed stateful pod in namespace: statefulset-8611, name: ss-0, uid: 6178ed5f-17b2-427f-b794-3a18f4c955a7, status phase: Failed. Waiting for statefulset controller to delete.
Aug 13 18:53:04.565: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-8611
STEP: Removing pod with conflicting port in namespace statefulset-8611
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-8611 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 13 18:53:10.827: INFO: Deleting all statefulset in ns statefulset-8611
Aug 13 18:53:10.831: INFO: Scaling statefulset ss to 0
Aug 13 18:53:30.902: INFO: Waiting for statefulset status.replicas updated to 0
Aug 13 18:53:30.906: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:53:31.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8611" for this suite.

• [SLOW TEST:33.776 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":275,"completed":148,"skipped":2441,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:53:31.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-4af6fc25-e8e6-471a-a929-3229742a7d7b
STEP: Creating a pod to test consume secrets
Aug 13 18:53:33.975: INFO: Waiting up to 5m0s for pod "pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9" in namespace "secrets-3629" to be "Succeeded or Failed"
Aug 13 18:53:34.252: INFO: Pod "pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9": Phase="Pending", Reason="", readiness=false. Elapsed: 276.6033ms
Aug 13 18:53:36.256: INFO: Pod "pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280211794s
Aug 13 18:53:38.443: INFO: Pod "pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467630643s
Aug 13 18:53:40.670: INFO: Pod "pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.694883952s
Aug 13 18:53:42.801: INFO: Pod "pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9": Phase="Running", Reason="", readiness=true. Elapsed: 8.825711248s
Aug 13 18:53:44.831: INFO: Pod "pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.85571035s
STEP: Saw pod success
Aug 13 18:53:44.831: INFO: Pod "pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9" satisfied condition "Succeeded or Failed"
Aug 13 18:53:44.833: INFO: Trying to get logs from node kali-worker pod pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9 container secret-volume-test: 
STEP: delete the pod
Aug 13 18:53:44.893: INFO: Waiting for pod pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9 to disappear
Aug 13 18:53:44.916: INFO: Pod pod-secrets-62284e8a-0934-4971-9fc5-67c8b41701f9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:53:44.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3629" for this suite.
STEP: Destroying namespace "secret-namespace-7930" for this suite.

• [SLOW TEST:13.203 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":275,"completed":149,"skipped":2450,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:53:45.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-downwardapi-rtd6
STEP: Creating a pod to test atomic-volume-subpath
Aug 13 18:53:45.200: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-rtd6" in namespace "subpath-5949" to be "Succeeded or Failed"
Aug 13 18:53:45.203: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.893054ms
Aug 13 18:53:47.208: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007416538s
Aug 13 18:53:49.256: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 4.056303042s
Aug 13 18:53:51.259: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 6.059335588s
Aug 13 18:53:53.263: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 8.062438329s
Aug 13 18:53:55.265: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 10.065390871s
Aug 13 18:53:57.270: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 12.069877883s
Aug 13 18:53:59.274: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 14.074364829s
Aug 13 18:54:01.279: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 16.078544081s
Aug 13 18:54:03.503: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 18.302437386s
Aug 13 18:54:05.506: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 20.305763508s
Aug 13 18:54:07.509: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 22.309047998s
Aug 13 18:54:09.513: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Running", Reason="", readiness=true. Elapsed: 24.31253799s
Aug 13 18:54:11.516: INFO: Pod "pod-subpath-test-downwardapi-rtd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.315785183s
STEP: Saw pod success
Aug 13 18:54:11.516: INFO: Pod "pod-subpath-test-downwardapi-rtd6" satisfied condition "Succeeded or Failed"
Aug 13 18:54:11.518: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-downwardapi-rtd6 container test-container-subpath-downwardapi-rtd6: 
STEP: delete the pod
Aug 13 18:54:11.645: INFO: Waiting for pod pod-subpath-test-downwardapi-rtd6 to disappear
Aug 13 18:54:11.690: INFO: Pod pod-subpath-test-downwardapi-rtd6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-rtd6
Aug 13 18:54:11.690: INFO: Deleting pod "pod-subpath-test-downwardapi-rtd6" in namespace "subpath-5949"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:54:11.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5949" for this suite.

• [SLOW TEST:26.701 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":275,"completed":150,"skipped":2460,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:54:11.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:54:11.950: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:54:13.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9392" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":275,"completed":151,"skipped":2467,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:54:13.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 18:54:13.463: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99cee3b9-d736-4048-ac7f-6469d8c790e9" in namespace "projected-3539" to be "Succeeded or Failed"
Aug 13 18:54:13.547: INFO: Pod "downwardapi-volume-99cee3b9-d736-4048-ac7f-6469d8c790e9": Phase="Pending", Reason="", readiness=false. Elapsed: 83.554423ms
Aug 13 18:54:15.550: INFO: Pod "downwardapi-volume-99cee3b9-d736-4048-ac7f-6469d8c790e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087311518s
Aug 13 18:54:17.712: INFO: Pod "downwardapi-volume-99cee3b9-d736-4048-ac7f-6469d8c790e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.249140156s
Aug 13 18:54:19.715: INFO: Pod "downwardapi-volume-99cee3b9-d736-4048-ac7f-6469d8c790e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.252117507s
STEP: Saw pod success
Aug 13 18:54:19.715: INFO: Pod "downwardapi-volume-99cee3b9-d736-4048-ac7f-6469d8c790e9" satisfied condition "Succeeded or Failed"
Aug 13 18:54:19.725: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-99cee3b9-d736-4048-ac7f-6469d8c790e9 container client-container: 
STEP: delete the pod
Aug 13 18:54:19.781: INFO: Waiting for pod downwardapi-volume-99cee3b9-d736-4048-ac7f-6469d8c790e9 to disappear
Aug 13 18:54:19.791: INFO: Pod downwardapi-volume-99cee3b9-d736-4048-ac7f-6469d8c790e9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:54:19.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3539" for this suite.

• [SLOW TEST:6.648 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":152,"skipped":2502,"failed":0}
SSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:54:19.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating all guestbook components
Aug 13 18:54:20.300: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Aug 13 18:54:20.300: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2531'
Aug 13 18:54:24.482: INFO: stderr: ""
Aug 13 18:54:24.482: INFO: stdout: "service/agnhost-slave created\n"
Aug 13 18:54:24.483: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Aug 13 18:54:24.483: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2531'
Aug 13 18:54:24.784: INFO: stderr: ""
Aug 13 18:54:24.784: INFO: stdout: "service/agnhost-master created\n"
Aug 13 18:54:24.784: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 13 18:54:24.784: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2531'
Aug 13 18:54:25.229: INFO: stderr: ""
Aug 13 18:54:25.229: INFO: stdout: "service/frontend created\n"
Aug 13 18:54:25.229: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Aug 13 18:54:25.229: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2531'
Aug 13 18:54:25.534: INFO: stderr: ""
Aug 13 18:54:25.534: INFO: stdout: "deployment.apps/frontend created\n"
Aug 13 18:54:25.534: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 13 18:54:25.534: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2531'
Aug 13 18:54:25.933: INFO: stderr: ""
Aug 13 18:54:25.933: INFO: stdout: "deployment.apps/agnhost-master created\n"
Aug 13 18:54:25.933: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 13 18:54:25.933: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2531'
Aug 13 18:54:26.255: INFO: stderr: ""
Aug 13 18:54:26.255: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Aug 13 18:54:26.255: INFO: Waiting for all frontend pods to be Running.
Aug 13 18:54:41.305: INFO: Waiting for frontend to serve content.
Aug 13 18:54:41.315: INFO: Trying to add a new entry to the guestbook.
Aug 13 18:54:41.327: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 13 18:54:41.332: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2531'
Aug 13 18:54:41.852: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 18:54:41.852: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 13 18:54:41.852: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2531'
Aug 13 18:54:42.066: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 18:54:42.066: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 13 18:54:42.066: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2531'
Aug 13 18:54:42.298: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 18:54:42.298: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 13 18:54:42.298: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2531'
Aug 13 18:54:42.411: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 18:54:42.411: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 13 18:54:42.411: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2531'
Aug 13 18:54:43.461: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 18:54:43.461: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 13 18:54:43.461: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2531'
Aug 13 18:54:44.060: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 18:54:44.060: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:54:44.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2531" for this suite.

• [SLOW TEST:24.356 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:310
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":275,"completed":153,"skipped":2505,"failed":0}
SSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:54:44.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 13 18:54:45.630: INFO: Waiting up to 5m0s for pod "downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd" in namespace "downward-api-5009" to be "Succeeded or Failed"
Aug 13 18:54:45.639: INFO: Pod "downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.080388ms
Aug 13 18:54:47.863: INFO: Pod "downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.23353634s
Aug 13 18:54:49.873: INFO: Pod "downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243151639s
Aug 13 18:54:52.102: INFO: Pod "downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd": Phase="Running", Reason="", readiness=true. Elapsed: 6.472190399s
Aug 13 18:54:54.106: INFO: Pod "downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.476257621s
STEP: Saw pod success
Aug 13 18:54:54.106: INFO: Pod "downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd" satisfied condition "Succeeded or Failed"
Aug 13 18:54:54.109: INFO: Trying to get logs from node kali-worker pod downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd container dapi-container: 
STEP: delete the pod
Aug 13 18:54:54.185: INFO: Waiting for pod downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd to disappear
Aug 13 18:54:54.196: INFO: Pod downward-api-71fa494d-1f3a-49b0-b3af-ae09bcdc63cd no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:54:54.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5009" for this suite.

• [SLOW TEST:9.947 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":275,"completed":154,"skipped":2513,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:54:54.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 18:55:02.244: INFO: Waiting up to 5m0s for pod "client-envvars-da9981b2-7709-425b-ab17-6faab6c0860f" in namespace "pods-5419" to be "Succeeded or Failed"
Aug 13 18:55:02.611: INFO: Pod "client-envvars-da9981b2-7709-425b-ab17-6faab6c0860f": Phase="Pending", Reason="", readiness=false. Elapsed: 367.153251ms
Aug 13 18:55:04.676: INFO: Pod "client-envvars-da9981b2-7709-425b-ab17-6faab6c0860f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.431913978s
Aug 13 18:55:06.711: INFO: Pod "client-envvars-da9981b2-7709-425b-ab17-6faab6c0860f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.467628603s
Aug 13 18:55:08.715: INFO: Pod "client-envvars-da9981b2-7709-425b-ab17-6faab6c0860f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.471117115s
STEP: Saw pod success
Aug 13 18:55:08.715: INFO: Pod "client-envvars-da9981b2-7709-425b-ab17-6faab6c0860f" satisfied condition "Succeeded or Failed"
Aug 13 18:55:08.717: INFO: Trying to get logs from node kali-worker pod client-envvars-da9981b2-7709-425b-ab17-6faab6c0860f container env3cont: 
STEP: delete the pod
Aug 13 18:55:08.994: INFO: Waiting for pod client-envvars-da9981b2-7709-425b-ab17-6faab6c0860f to disappear
Aug 13 18:55:09.083: INFO: Pod client-envvars-da9981b2-7709-425b-ab17-6faab6c0860f no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:55:09.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5419" for this suite.

• [SLOW TEST:14.887 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":275,"completed":155,"skipped":2537,"failed":0}
SSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:55:09.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 13 18:55:09.265: INFO: Waiting up to 5m0s for pod "downward-api-255225db-b00a-4aa6-af50-e2c2cc365919" in namespace "downward-api-2224" to be "Succeeded or Failed"
Aug 13 18:55:09.308: INFO: Pod "downward-api-255225db-b00a-4aa6-af50-e2c2cc365919": Phase="Pending", Reason="", readiness=false. Elapsed: 42.520694ms
Aug 13 18:55:11.312: INFO: Pod "downward-api-255225db-b00a-4aa6-af50-e2c2cc365919": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047257574s
Aug 13 18:55:13.317: INFO: Pod "downward-api-255225db-b00a-4aa6-af50-e2c2cc365919": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05200648s
Aug 13 18:55:15.352: INFO: Pod "downward-api-255225db-b00a-4aa6-af50-e2c2cc365919": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.086975602s
STEP: Saw pod success
Aug 13 18:55:15.352: INFO: Pod "downward-api-255225db-b00a-4aa6-af50-e2c2cc365919" satisfied condition "Succeeded or Failed"
Aug 13 18:55:15.355: INFO: Trying to get logs from node kali-worker pod downward-api-255225db-b00a-4aa6-af50-e2c2cc365919 container dapi-container: 
STEP: delete the pod
Aug 13 18:55:15.390: INFO: Waiting for pod downward-api-255225db-b00a-4aa6-af50-e2c2cc365919 to disappear
Aug 13 18:55:15.407: INFO: Pod downward-api-255225db-b00a-4aa6-af50-e2c2cc365919 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:55:15.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2224" for this suite.

• [SLOW TEST:6.324 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":275,"completed":156,"skipped":2540,"failed":0}
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:55:15.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 13 18:55:25.814: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 13 18:55:25.821: INFO: Pod pod-with-prestop-http-hook still exists
Aug 13 18:55:27.821: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 13 18:55:27.826: INFO: Pod pod-with-prestop-http-hook still exists
Aug 13 18:55:29.821: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 13 18:55:29.825: INFO: Pod pod-with-prestop-http-hook still exists
Aug 13 18:55:31.822: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 13 18:55:31.825: INFO: Pod pod-with-prestop-http-hook still exists
Aug 13 18:55:33.821: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 13 18:55:33.825: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:55:33.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7140" for this suite.

• [SLOW TEST:18.423 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":275,"completed":157,"skipped":2546,"failed":0}
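The test above creates a pod whose container registers a preStop httpGet hook, deletes the pod, and then checks that the hook handler received the request. A minimal sketch of that pod shape, as a plain manifest dict; the field names (`lifecycle.preStop.httpGet`) are the real Pod API fields, while the image, hook path, and port values here are placeholders, not the exact ones the e2e suite uses:

```python
def prestop_http_pod(name, target_ip, target_port=8080):
    """Build a Pod manifest dict with a preStop httpGet lifecycle hook.

    On pod deletion, the kubelet performs an HTTP GET against
    target_ip:target_port before sending the container its stop signal.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": "k8s.gcr.io/pause:3.2",  # placeholder image
                "lifecycle": {
                    "preStop": {
                        "httpGet": {
                            "host": target_ip,       # hook-handler pod IP
                            "path": "/echo?msg=prestop",  # illustrative path
                            "port": target_port,
                        }
                    }
                },
            }],
        },
    }
```

The "Waiting for pod ... to disappear" loop in the log is the cleanup phase: deletion is not instant precisely because the kubelet runs the preStop hook and honors the termination grace period first.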
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:55:33.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 13 18:55:34.319: INFO: Waiting up to 5m0s for pod "pod-e665acdf-2936-4aea-8dab-093cb84456cf" in namespace "emptydir-9336" to be "Succeeded or Failed"
Aug 13 18:55:34.324: INFO: Pod "pod-e665acdf-2936-4aea-8dab-093cb84456cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.149079ms
Aug 13 18:55:36.327: INFO: Pod "pod-e665acdf-2936-4aea-8dab-093cb84456cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007578341s
Aug 13 18:55:38.331: INFO: Pod "pod-e665acdf-2936-4aea-8dab-093cb84456cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.011180152s
Aug 13 18:55:41.085: INFO: Pod "pod-e665acdf-2936-4aea-8dab-093cb84456cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.765509493s
Aug 13 18:55:43.223: INFO: Pod "pod-e665acdf-2936-4aea-8dab-093cb84456cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.903364092s
STEP: Saw pod success
Aug 13 18:55:43.223: INFO: Pod "pod-e665acdf-2936-4aea-8dab-093cb84456cf" satisfied condition "Succeeded or Failed"
Aug 13 18:55:43.528: INFO: Trying to get logs from node kali-worker pod pod-e665acdf-2936-4aea-8dab-093cb84456cf container test-container: 
STEP: delete the pod
Aug 13 18:55:44.439: INFO: Waiting for pod pod-e665acdf-2936-4aea-8dab-093cb84456cf to disappear
Aug 13 18:55:44.493: INFO: Pod pod-e665acdf-2936-4aea-8dab-093cb84456cf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:55:44.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9336" for this suite.

• [SLOW TEST:10.666 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":158,"skipped":2550,"failed":0}
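The "(non-root,0666,default)" case above mounts an emptyDir volume on the default (disk-backed) medium into a container running as a non-root user, which creates a file with 0666 permissions that the test then verifies. A hedged sketch of that pod shape; the busybox image, UID, and shell command are stand-ins for the suite's own test image and arguments:

```python
def emptydir_mode_pod(name, mode=0o666, uid=1001):
    """Pod manifest dict: non-root container writing a mode-0666 file
    into a default-medium emptyDir volume."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "securityContext": {"runAsUser": uid},  # non-root (placeholder UID)
            "volumes": [{"name": "test-volume", "emptyDir": {}}],  # {} = default medium
            "containers": [{
                "name": "test-container",
                "image": "busybox",  # placeholder image
                "command": ["sh", "-c",
                            f"touch /test-volume/file && chmod {mode:o} /test-volume/file"],
                "volumeMounts": [{"name": "test-volume",
                                  "mountPath": "/test-volume"}],
            }],
            "restartPolicy": "Never",  # pod should reach Succeeded, as in the log
        },
    }
```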
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:55:44.503: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-9228
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 13 18:55:46.438: INFO: Found 0 stateful pods, waiting for 3
Aug 13 18:55:56.953: INFO: Found 2 stateful pods, waiting for 3
Aug 13 18:56:06.497: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 18:56:06.497: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 18:56:06.497: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 13 18:56:16.443: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 18:56:16.443: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 18:56:16.443: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 18:56:16.454: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 13 18:56:16.677: INFO: stderr: "I0813 18:56:16.574645    1936 log.go:172] (0xc000aa8000) (0xc00028a140) Create stream\nI0813 18:56:16.574692    1936 log.go:172] (0xc000aa8000) (0xc00028a140) Stream added, broadcasting: 1\nI0813 18:56:16.577004    1936 log.go:172] (0xc000aa8000) Reply frame received for 1\nI0813 18:56:16.577035    1936 log.go:172] (0xc000aa8000) (0xc000810000) Create stream\nI0813 18:56:16.577048    1936 log.go:172] (0xc000aa8000) (0xc000810000) Stream added, broadcasting: 3\nI0813 18:56:16.577849    1936 log.go:172] (0xc000aa8000) Reply frame received for 3\nI0813 18:56:16.577869    1936 log.go:172] (0xc000aa8000) (0xc00028bd60) Create stream\nI0813 18:56:16.577876    1936 log.go:172] (0xc000aa8000) (0xc00028bd60) Stream added, broadcasting: 5\nI0813 18:56:16.578702    1936 log.go:172] (0xc000aa8000) Reply frame received for 5\nI0813 18:56:16.633792    1936 log.go:172] (0xc000aa8000) Data frame received for 5\nI0813 18:56:16.633813    1936 log.go:172] (0xc00028bd60) (5) Data frame handling\nI0813 18:56:16.633825    1936 log.go:172] (0xc00028bd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:56:16.668464    1936 log.go:172] (0xc000aa8000) Data frame received for 3\nI0813 18:56:16.668493    1936 log.go:172] (0xc000810000) (3) Data frame handling\nI0813 18:56:16.668517    1936 log.go:172] (0xc000810000) (3) Data frame sent\nI0813 18:56:16.668539    1936 log.go:172] (0xc000aa8000) Data frame received for 3\nI0813 18:56:16.668556    1936 log.go:172] (0xc000810000) (3) Data frame handling\nI0813 18:56:16.668581    1936 log.go:172] (0xc000aa8000) Data frame received for 5\nI0813 18:56:16.668602    1936 log.go:172] (0xc00028bd60) (5) Data frame handling\nI0813 18:56:16.670377    1936 log.go:172] (0xc000aa8000) Data frame received for 1\nI0813 18:56:16.670393    1936 log.go:172] (0xc00028a140) (1) Data frame handling\nI0813 18:56:16.670404    1936 log.go:172] (0xc00028a140) (1) Data frame sent\nI0813 18:56:16.670412    1936 log.go:172] (0xc000aa8000) (0xc00028a140) Stream removed, broadcasting: 1\nI0813 18:56:16.670659    1936 log.go:172] (0xc000aa8000) Go away received\nI0813 18:56:16.670843    1936 log.go:172] (0xc000aa8000) (0xc00028a140) Stream removed, broadcasting: 1\nI0813 18:56:16.670872    1936 log.go:172] (0xc000aa8000) (0xc000810000) Stream removed, broadcasting: 3\nI0813 18:56:16.670897    1936 log.go:172] (0xc000aa8000) (0xc00028bd60) Stream removed, broadcasting: 5\n"
Aug 13 18:56:16.677: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 13 18:56:16.677: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 13 18:56:26.708: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 13 18:56:36.761: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 13 18:56:36.960: INFO: stderr: "I0813 18:56:36.888822    1957 log.go:172] (0xc000534840) (0xc0006575e0) Create stream\nI0813 18:56:36.888873    1957 log.go:172] (0xc000534840) (0xc0006575e0) Stream added, broadcasting: 1\nI0813 18:56:36.894641    1957 log.go:172] (0xc000534840) Reply frame received for 1\nI0813 18:56:36.894680    1957 log.go:172] (0xc000534840) (0xc0008aa000) Create stream\nI0813 18:56:36.894690    1957 log.go:172] (0xc000534840) (0xc0008aa000) Stream added, broadcasting: 3\nI0813 18:56:36.895639    1957 log.go:172] (0xc000534840) Reply frame received for 3\nI0813 18:56:36.895688    1957 log.go:172] (0xc000534840) (0xc000657720) Create stream\nI0813 18:56:36.895703    1957 log.go:172] (0xc000534840) (0xc000657720) Stream added, broadcasting: 5\nI0813 18:56:36.896490    1957 log.go:172] (0xc000534840) Reply frame received for 5\nI0813 18:56:36.952237    1957 log.go:172] (0xc000534840) Data frame received for 3\nI0813 18:56:36.952285    1957 log.go:172] (0xc0008aa000) (3) Data frame handling\nI0813 18:56:36.952313    1957 log.go:172] (0xc0008aa000) (3) Data frame sent\nI0813 18:56:36.952334    1957 log.go:172] (0xc000534840) Data frame received for 5\nI0813 18:56:36.952357    1957 log.go:172] (0xc000657720) (5) Data frame handling\nI0813 18:56:36.952371    1957 log.go:172] (0xc000657720) (5) Data frame sent\nI0813 18:56:36.952383    1957 log.go:172] (0xc000534840) Data frame received for 5\nI0813 18:56:36.952393    1957 log.go:172] (0xc000657720) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0813 18:56:36.952417    1957 log.go:172] (0xc000534840) Data frame received for 3\nI0813 18:56:36.952445    1957 log.go:172] (0xc0008aa000) (3) Data frame handling\nI0813 18:56:36.953823    1957 log.go:172] (0xc000534840) Data frame received for 1\nI0813 18:56:36.953843    1957 log.go:172] (0xc0006575e0) (1) Data frame handling\nI0813 18:56:36.953863    1957 log.go:172] (0xc0006575e0) (1) Data frame sent\nI0813 18:56:36.953880    1957 log.go:172] (0xc000534840) (0xc0006575e0) Stream removed, broadcasting: 1\nI0813 18:56:36.954109    1957 log.go:172] (0xc000534840) Go away received\nI0813 18:56:36.954227    1957 log.go:172] (0xc000534840) (0xc0006575e0) Stream removed, broadcasting: 1\nI0813 18:56:36.954247    1957 log.go:172] (0xc000534840) (0xc0008aa000) Stream removed, broadcasting: 3\nI0813 18:56:36.954260    1957 log.go:172] (0xc000534840) (0xc000657720) Stream removed, broadcasting: 5\n"
Aug 13 18:56:36.960: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 13 18:56:36.960: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 13 18:56:47.002: INFO: Waiting for StatefulSet statefulset-9228/ss2 to complete update
Aug 13 18:56:47.002: INFO: Waiting for Pod statefulset-9228/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 13 18:56:47.002: INFO: Waiting for Pod statefulset-9228/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 13 18:56:57.011: INFO: Waiting for StatefulSet statefulset-9228/ss2 to complete update
Aug 13 18:56:57.011: INFO: Waiting for Pod statefulset-9228/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 13 18:56:57.011: INFO: Waiting for Pod statefulset-9228/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 13 18:57:07.009: INFO: Waiting for StatefulSet statefulset-9228/ss2 to complete update
Aug 13 18:57:07.010: INFO: Waiting for Pod statefulset-9228/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 13 18:57:17.029: INFO: Waiting for StatefulSet statefulset-9228/ss2 to complete update
STEP: Rolling back to a previous revision
Aug 13 18:57:27.009: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Aug 13 18:57:27.322: INFO: stderr: "I0813 18:57:27.130614    1977 log.go:172] (0xc000a4e0b0) (0xc000b185a0) Create stream\nI0813 18:57:27.130688    1977 log.go:172] (0xc000a4e0b0) (0xc000b185a0) Stream added, broadcasting: 1\nI0813 18:57:27.133565    1977 log.go:172] (0xc000a4e0b0) Reply frame received for 1\nI0813 18:57:27.133601    1977 log.go:172] (0xc000a4e0b0) (0xc0009c03c0) Create stream\nI0813 18:57:27.133610    1977 log.go:172] (0xc000a4e0b0) (0xc0009c03c0) Stream added, broadcasting: 3\nI0813 18:57:27.134575    1977 log.go:172] (0xc000a4e0b0) Reply frame received for 3\nI0813 18:57:27.134626    1977 log.go:172] (0xc000a4e0b0) (0xc0009c0460) Create stream\nI0813 18:57:27.134648    1977 log.go:172] (0xc000a4e0b0) (0xc0009c0460) Stream added, broadcasting: 5\nI0813 18:57:27.135469    1977 log.go:172] (0xc000a4e0b0) Reply frame received for 5\nI0813 18:57:27.193620    1977 log.go:172] (0xc000a4e0b0) Data frame received for 5\nI0813 18:57:27.193649    1977 log.go:172] (0xc0009c0460) (5) Data frame handling\nI0813 18:57:27.193666    1977 log.go:172] (0xc0009c0460) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0813 18:57:27.305219    1977 log.go:172] (0xc000a4e0b0) Data frame received for 3\nI0813 18:57:27.305252    1977 log.go:172] (0xc0009c03c0) (3) Data frame handling\nI0813 18:57:27.305276    1977 log.go:172] (0xc0009c03c0) (3) Data frame sent\nI0813 18:57:27.312903    1977 log.go:172] (0xc000a4e0b0) Data frame received for 3\nI0813 18:57:27.312924    1977 log.go:172] (0xc0009c03c0) (3) Data frame handling\nI0813 18:57:27.312939    1977 log.go:172] (0xc000a4e0b0) Data frame received for 5\nI0813 18:57:27.312949    1977 log.go:172] (0xc0009c0460) (5) Data frame handling\nI0813 18:57:27.314880    1977 log.go:172] (0xc000a4e0b0) Data frame received for 1\nI0813 18:57:27.314959    1977 log.go:172] (0xc000b185a0) (1) Data frame handling\nI0813 18:57:27.315034    1977 log.go:172] (0xc000b185a0) (1) Data frame sent\nI0813 18:57:27.315057    1977 log.go:172] (0xc000a4e0b0) (0xc000b185a0) Stream removed, broadcasting: 1\nI0813 18:57:27.315077    1977 log.go:172] (0xc000a4e0b0) Go away received\nI0813 18:57:27.315614    1977 log.go:172] (0xc000a4e0b0) (0xc000b185a0) Stream removed, broadcasting: 1\nI0813 18:57:27.315638    1977 log.go:172] (0xc000a4e0b0) (0xc0009c03c0) Stream removed, broadcasting: 3\nI0813 18:57:27.315651    1977 log.go:172] (0xc000a4e0b0) (0xc0009c0460) Stream removed, broadcasting: 5\n"
Aug 13 18:57:27.323: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Aug 13 18:57:27.323: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Aug 13 18:57:37.369: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 13 18:57:47.491: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=statefulset-9228 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Aug 13 18:57:47.709: INFO: stderr: "I0813 18:57:47.643223    1996 log.go:172] (0xc00003ae70) (0xc0006c75e0) Create stream\nI0813 18:57:47.643325    1996 log.go:172] (0xc00003ae70) (0xc0006c75e0) Stream added, broadcasting: 1\nI0813 18:57:47.646233    1996 log.go:172] (0xc00003ae70) Reply frame received for 1\nI0813 18:57:47.646282    1996 log.go:172] (0xc00003ae70) (0xc0009d6000) Create stream\nI0813 18:57:47.646305    1996 log.go:172] (0xc00003ae70) (0xc0009d6000) Stream added, broadcasting: 3\nI0813 18:57:47.647306    1996 log.go:172] (0xc00003ae70) Reply frame received for 3\nI0813 18:57:47.647334    1996 log.go:172] (0xc00003ae70) (0xc000396000) Create stream\nI0813 18:57:47.647344    1996 log.go:172] (0xc00003ae70) (0xc000396000) Stream added, broadcasting: 5\nI0813 18:57:47.648416    1996 log.go:172] (0xc00003ae70) Reply frame received for 5\nI0813 18:57:47.701049    1996 log.go:172] (0xc00003ae70) Data frame received for 5\nI0813 18:57:47.701079    1996 log.go:172] (0xc000396000) (5) Data frame handling\nI0813 18:57:47.701092    1996 log.go:172] (0xc000396000) (5) Data frame sent\nI0813 18:57:47.701100    1996 log.go:172] (0xc00003ae70) Data frame received for 5\nI0813 18:57:47.701108    1996 log.go:172] (0xc000396000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0813 18:57:47.701128    1996 log.go:172] (0xc00003ae70) Data frame received for 3\nI0813 18:57:47.701141    1996 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0813 18:57:47.701187    1996 log.go:172] (0xc0009d6000) (3) Data frame sent\nI0813 18:57:47.701218    1996 log.go:172] (0xc00003ae70) Data frame received for 3\nI0813 18:57:47.701233    1996 log.go:172] (0xc0009d6000) (3) Data frame handling\nI0813 18:57:47.702673    1996 log.go:172] (0xc00003ae70) Data frame received for 1\nI0813 18:57:47.702705    1996 log.go:172] (0xc0006c75e0) (1) Data frame handling\nI0813 18:57:47.702719    1996 log.go:172] (0xc0006c75e0) (1) Data frame sent\nI0813 18:57:47.702735    1996 log.go:172] (0xc00003ae70) (0xc0006c75e0) Stream removed, broadcasting: 1\nI0813 18:57:47.702754    1996 log.go:172] (0xc00003ae70) Go away received\nI0813 18:57:47.703167    1996 log.go:172] (0xc00003ae70) (0xc0006c75e0) Stream removed, broadcasting: 1\nI0813 18:57:47.703192    1996 log.go:172] (0xc00003ae70) (0xc0009d6000) Stream removed, broadcasting: 3\nI0813 18:57:47.703206    1996 log.go:172] (0xc00003ae70) (0xc000396000) Stream removed, broadcasting: 5\n"
Aug 13 18:57:47.710: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Aug 13 18:57:47.710: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Aug 13 18:57:57.731: INFO: Waiting for StatefulSet statefulset-9228/ss2 to complete update
Aug 13 18:57:57.731: INFO: Waiting for Pod statefulset-9228/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 13 18:57:57.731: INFO: Waiting for Pod statefulset-9228/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 13 18:58:07.739: INFO: Waiting for StatefulSet statefulset-9228/ss2 to complete update
Aug 13 18:58:07.739: INFO: Waiting for Pod statefulset-9228/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 13 18:58:17.740: INFO: Waiting for StatefulSet statefulset-9228/ss2 to complete update
Aug 13 18:58:17.740: INFO: Waiting for Pod statefulset-9228/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Aug 13 18:58:27.914: INFO: Waiting for StatefulSet statefulset-9228/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 13 18:58:37.739: INFO: Deleting all statefulset in ns statefulset-9228
Aug 13 18:58:37.742: INFO: Scaling statefulset ss2 to 0
Aug 13 18:58:57.779: INFO: Waiting for statefulset status.replicas updated to 0
Aug 13 18:58:57.782: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:58:57.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9228" for this suite.

• [SLOW TEST:193.303 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":275,"completed":159,"skipped":2557,"failed":0}
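The update and rollback above both proceed "in reverse ordinal order": the StatefulSet controller replaces the highest-ordinal pod first and waits for each replacement to be Running and Ready before moving down, which is why the log shows ss2-0 and ss2-1 still waiting on the old revision after ss2-2 has been updated. A toy model of just that ordering (the controller's real logic lives in kube-controller-manager; this only illustrates the sequence):

```python
def rolling_update_order(name, replicas, partition=0):
    """Ordinals touched by a StatefulSet RollingUpdate, highest first.

    Pods with ordinal < partition stay on the old revision, matching the
    strategy's partition semantics; partition=0 (the default) updates all.
    """
    return [f"{name}-{i}" for i in range(replicas - 1, partition - 1, -1)]
```

For the three-replica set in the log, this yields ss2-2, then ss2-1, then ss2-0, matching the observed revision transitions in both the roll-forward and the rollback.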
SS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:58:57.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-2l5lf in namespace proxy-256
I0813 18:58:57.926130       7 runners.go:190] Created replication controller with name: proxy-service-2l5lf, namespace: proxy-256, replica count: 1
I0813 18:58:58.976548       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 18:58:59.976837       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 18:59:00.977076       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 18:59:01.977295       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 18:59:02.977544       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0813 18:59:03.977787       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0813 18:59:04.977991       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0813 18:59:05.978196       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0813 18:59:06.978415       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0813 18:59:07.978681       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0813 18:59:08.978917       7 runners.go:190] proxy-service-2l5lf Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 13 18:59:09.009: INFO: setup took 11.157579344s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
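Every attempt that follows hits the apiserver's proxy subresource, whose path encodes an optional scheme, the pod or service name, and an optional port as `scheme:name:port`. A small helper reproducing the pattern seen in these URLs (the helper name is mine, not part of the e2e suite):

```python
def proxy_path(namespace, kind, name, port=None, scheme=None):
    """Build an apiserver proxy path for a pod or service.

    kind is "pods" or "services"; scheme ("http"/"https") and port are
    optional and, when present, are folded into the target segment.
    """
    target = name
    if port is not None:
        target = f"{target}:{port}"
    if scheme is not None:
        target = f"{scheme}:{target}"
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"
```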
Aug 13 18:59:09.024: INFO: (0) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 14.848361ms)
Aug 13 18:59:09.024: INFO: (0) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 14.930414ms)
Aug 13 18:59:09.024: INFO: (0) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 15.023247ms)
Aug 13 18:59:09.025: INFO: (0) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testt... (200; 34.916324ms)
Aug 13 18:59:09.045: INFO: (0) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 35.227385ms)
Aug 13 18:59:09.045: INFO: (0) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t/proxy/: test (200; 35.617103ms)
Aug 13 18:59:09.055: INFO: (0) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:460/proxy/: tls baz (200; 45.598647ms)
Aug 13 18:59:09.055: INFO: (0) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 45.74645ms)
Aug 13 18:59:09.058: INFO: (0) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: t... (200; 5.166269ms)
Aug 13 18:59:09.064: INFO: (1) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t/proxy/: test (200; 5.102875ms)
Aug 13 18:59:09.064: INFO: (1) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: testtest (200; 8.029598ms)
Aug 13 18:59:09.113: INFO: (2) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 8.08772ms)
Aug 13 18:59:09.113: INFO: (2) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:460/proxy/: tls baz (200; 8.225863ms)
Aug 13 18:59:09.113: INFO: (2) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 8.196335ms)
Aug 13 18:59:09.114: INFO: (2) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: testt... (200; 10.21ms)
Aug 13 18:59:09.116: INFO: (2) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 11.122385ms)
Aug 13 18:59:09.117: INFO: (2) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 11.418382ms)
Aug 13 18:59:09.119: INFO: (3) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 2.563884ms)
Aug 13 18:59:09.122: INFO: (3) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 5.01907ms)
Aug 13 18:59:09.122: INFO: (3) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:1080/proxy/: t... (200; 5.327381ms)
Aug 13 18:59:09.122: INFO: (3) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:462/proxy/: tls qux (200; 5.722101ms)
Aug 13 18:59:09.123: INFO: (3) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 5.834389ms)
Aug 13 18:59:09.122: INFO: (3) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 5.86446ms)
Aug 13 18:59:09.123: INFO: (3) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname2/proxy/: tls qux (200; 5.780783ms)
Aug 13 18:59:09.123: INFO: (3) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname2/proxy/: bar (200; 5.861506ms)
Aug 13 18:59:09.123: INFO: (3) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: testtest (200; 6.149406ms)
Aug 13 18:59:09.123: INFO: (3) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 6.197737ms)
Aug 13 18:59:09.123: INFO: (3) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 6.611517ms)
Aug 13 18:59:09.129: INFO: (4) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 5.810295ms)
Aug 13 18:59:09.129: INFO: (4) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:1080/proxy/: t... (200; 5.786009ms)
Aug 13 18:59:09.129: INFO: (4) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 5.76553ms)
Aug 13 18:59:09.130: INFO: (4) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 6.418806ms)
Aug 13 18:59:09.130: INFO: (4) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t/proxy/: test (200; 6.503058ms)
Aug 13 18:59:09.130: INFO: (4) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 6.550877ms)
Aug 13 18:59:09.130: INFO: (4) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testtest (200; 5.731728ms)
Aug 13 18:59:09.136: INFO: (5) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 5.694407ms)
Aug 13 18:59:09.136: INFO: (5) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:1080/proxy/: t... (200; 5.778327ms)
Aug 13 18:59:09.136: INFO: (5) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testt... (200; 3.905723ms)
Aug 13 18:59:09.141: INFO: (6) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testtest (200; 4.566799ms)
Aug 13 18:59:09.142: INFO: (6) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 5.166044ms)
Aug 13 18:59:09.142: INFO: (6) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname2/proxy/: tls qux (200; 5.582271ms)
Aug 13 18:59:09.142: INFO: (6) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 5.57248ms)
Aug 13 18:59:09.142: INFO: (6) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 5.698634ms)
Aug 13 18:59:09.142: INFO: (6) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 5.666426ms)
Aug 13 18:59:09.176: INFO: (7) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 33.387783ms)
Aug 13 18:59:09.176: INFO: (7) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 33.50029ms)
Aug 13 18:59:09.176: INFO: (7) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: t... (200; 33.543803ms)
Aug 13 18:59:09.176: INFO: (7) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:462/proxy/: tls qux (200; 33.592376ms)
Aug 13 18:59:09.176: INFO: (7) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 33.611687ms)
Aug 13 18:59:09.176: INFO: (7) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t/proxy/: test (200; 33.674326ms)
Aug 13 18:59:09.176: INFO: (7) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 34.108462ms)
Aug 13 18:59:09.177: INFO: (7) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testtesttest (200; 7.817033ms)
Aug 13 18:59:09.186: INFO: (8) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:1080/proxy/: t... (200; 8.022074ms)
Aug 13 18:59:09.186: INFO: (8) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 8.084183ms)
Aug 13 18:59:09.186: INFO: (8) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 8.269552ms)
Aug 13 18:59:09.186: INFO: (8) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 8.394829ms)
Aug 13 18:59:09.187: INFO: (8) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 8.449948ms)
Aug 13 18:59:09.187: INFO: (8) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 8.278996ms)
Aug 13 18:59:09.187: INFO: (8) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 8.322969ms)
Aug 13 18:59:09.187: INFO: (8) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:460/proxy/: tls baz (200; 8.867417ms)
Aug 13 18:59:09.198: INFO: (9) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname2/proxy/: tls qux (200; 10.733359ms)
Aug 13 18:59:09.198: INFO: (9) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 10.757363ms)
Aug 13 18:59:09.198: INFO: (9) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 10.657351ms)
Aug 13 18:59:09.198: INFO: (9) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 10.825519ms)
Aug 13 18:59:09.198: INFO: (9) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname2/proxy/: bar (200; 10.753086ms)
Aug 13 18:59:09.198: INFO: (9) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 10.81869ms)
Aug 13 18:59:09.199: INFO: (9) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t/proxy/: test (200; 11.400212ms)
Aug 13 18:59:09.199: INFO: (9) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:1080/proxy/: t... (200; 11.603381ms)
Aug 13 18:59:09.199: INFO: (9) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 11.868052ms)
Aug 13 18:59:09.199: INFO: (9) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:462/proxy/: tls qux (200; 11.897961ms)
Aug 13 18:59:09.199: INFO: (9) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 11.973302ms)
Aug 13 18:59:09.199: INFO: (9) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:460/proxy/: tls baz (200; 12.026286ms)
Aug 13 18:59:09.199: INFO: (9) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 11.955398ms)
Aug 13 18:59:09.199: INFO: (9) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 12.092175ms)
Aug 13 18:59:09.199: INFO: (9) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: testtestt... (200; 4.604255ms)
Aug 13 18:59:09.204: INFO: (10) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 4.831809ms)
Aug 13 18:59:09.204: INFO: (10) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname2/proxy/: bar (200; 4.86738ms)
Aug 13 18:59:09.204: INFO: (10) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname2/proxy/: tls qux (200; 4.840985ms)
Aug 13 18:59:09.204: INFO: (10) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 4.815959ms)
Aug 13 18:59:09.205: INFO: (10) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 5.301802ms)
Aug 13 18:59:09.205: INFO: (10) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 5.226933ms)
Aug 13 18:59:09.205: INFO: (10) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t/proxy/: test (200; 5.339637ms)
Aug 13 18:59:09.205: INFO: (10) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:460/proxy/: tls baz (200; 5.25708ms)
Aug 13 18:59:09.205: INFO: (10) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: testt... (200; 4.935062ms)
Aug 13 18:59:09.210: INFO: (11) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t/proxy/: test (200; 5.006564ms)
Aug 13 18:59:09.210: INFO: (11) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 5.103365ms)
Aug 13 18:59:09.210: INFO: (11) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 5.209971ms)
Aug 13 18:59:09.210: INFO: (11) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 5.426649ms)
Aug 13 18:59:09.210: INFO: (11) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 5.390656ms)
Aug 13 18:59:09.210: INFO: (11) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname2/proxy/: tls qux (200; 5.505393ms)
Aug 13 18:59:09.210: INFO: (11) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname2/proxy/: bar (200; 5.534913ms)
Aug 13 18:59:09.213: INFO: (12) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:460/proxy/: tls baz (200; 2.700191ms)
Aug 13 18:59:09.214: INFO: (12) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testt... (200; 3.220573ms)
Aug 13 18:59:09.214: INFO: (12) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 3.40206ms)
Aug 13 18:59:09.215: INFO: (12) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 4.305702ms)
Aug 13 18:59:09.215: INFO: (12) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 4.547193ms)
Aug 13 18:59:09.215: INFO: (12) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 4.569007ms)
Aug 13 18:59:09.215: INFO: (12) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname2/proxy/: bar (200; 4.686848ms)
Aug 13 18:59:09.215: INFO: (12) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 4.63958ms)
Aug 13 18:59:09.215: INFO: (12) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: test (200; 5.239319ms)
Aug 13 18:59:09.216: INFO: (12) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 5.322132ms)
Aug 13 18:59:09.216: INFO: (12) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:462/proxy/: tls qux (200; 5.311208ms)
Aug 13 18:59:09.216: INFO: (12) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 5.246451ms)
Aug 13 18:59:09.220: INFO: (13) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:462/proxy/: tls qux (200; 3.941364ms)
Aug 13 18:59:09.220: INFO: (13) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: test (200; 4.247375ms)
Aug 13 18:59:09.220: INFO: (13) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 4.210067ms)
Aug 13 18:59:09.221: INFO: (13) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 5.129744ms)
Aug 13 18:59:09.221: INFO: (13) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 5.201354ms)
Aug 13 18:59:09.221: INFO: (13) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:1080/proxy/: t... (200; 5.375885ms)
Aug 13 18:59:09.221: INFO: (13) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 5.30686ms)
Aug 13 18:59:09.221: INFO: (13) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 5.405768ms)
Aug 13 18:59:09.222: INFO: (13) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 5.484317ms)
Aug 13 18:59:09.222: INFO: (13) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname2/proxy/: bar (200; 5.550866ms)
Aug 13 18:59:09.222: INFO: (13) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname2/proxy/: tls qux (200; 5.470304ms)
Aug 13 18:59:09.222: INFO: (13) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:460/proxy/: tls baz (200; 5.729518ms)
Aug 13 18:59:09.222: INFO: (13) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testtest (200; 5.43737ms)
Aug 13 18:59:09.228: INFO: (14) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:1080/proxy/: t... (200; 5.472975ms)
Aug 13 18:59:09.228: INFO: (14) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 5.927053ms)
Aug 13 18:59:09.228: INFO: (14) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 5.889726ms)
Aug 13 18:59:09.228: INFO: (14) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testt... (200; 3.783342ms)
Aug 13 18:59:09.232: INFO: (15) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 3.637611ms)
Aug 13 18:59:09.232: INFO: (15) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t/proxy/: test (200; 3.677952ms)
Aug 13 18:59:09.232: INFO: (15) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:462/proxy/: tls qux (200; 3.852397ms)
Aug 13 18:59:09.232: INFO: (15) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 3.693658ms)
Aug 13 18:59:09.232: INFO: (15) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testtest (200; 4.514309ms)
Aug 13 18:59:09.238: INFO: (16) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:460/proxy/: tls baz (200; 4.761035ms)
Aug 13 18:59:09.239: INFO: (16) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:462/proxy/: tls qux (200; 5.153437ms)
Aug 13 18:59:09.239: INFO: (16) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: testt... (200; 5.668207ms)
Aug 13 18:59:09.239: INFO: (16) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 5.581995ms)
Aug 13 18:59:09.239: INFO: (16) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 5.59783ms)
Aug 13 18:59:09.242: INFO: (17) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t/proxy/: test (200; 2.97066ms)
Aug 13 18:59:09.242: INFO: (17) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 3.112983ms)
Aug 13 18:59:09.243: INFO: (17) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:460/proxy/: tls baz (200; 3.339077ms)
Aug 13 18:59:09.243: INFO: (17) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 3.772339ms)
Aug 13 18:59:09.243: INFO: (17) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 3.900915ms)
Aug 13 18:59:09.243: INFO: (17) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 4.131858ms)
Aug 13 18:59:09.244: INFO: (17) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:1080/proxy/: t... (200; 4.289149ms)
Aug 13 18:59:09.244: INFO: (17) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 4.313441ms)
Aug 13 18:59:09.244: INFO: (17) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname2/proxy/: bar (200; 4.327732ms)
Aug 13 18:59:09.244: INFO: (17) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: testtesttest (200; 4.938813ms)
Aug 13 18:59:09.249: INFO: (18) /api/v1/namespaces/proxy-256/pods/https:proxy-service-2l5lf-4cx5t:443/proxy/: t... (200; 5.532839ms)
Aug 13 18:59:09.250: INFO: (18) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 5.838613ms)
Aug 13 18:59:09.250: INFO: (18) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 5.75965ms)
Aug 13 18:59:09.250: INFO: (18) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname2/proxy/: tls qux (200; 5.750022ms)
Aug 13 18:59:09.252: INFO: (18) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 7.78329ms)
Aug 13 18:59:09.252: INFO: (18) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname2/proxy/: bar (200; 7.808391ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname1/proxy/: foo (200; 4.178829ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:162/proxy/: bar (200; 4.321846ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/services/https:proxy-service-2l5lf:tlsportname1/proxy/: tls baz (200; 4.530434ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 4.519982ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname1/proxy/: foo (200; 4.80793ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/services/http:proxy-service-2l5lf:portname2/proxy/: bar (200; 4.714071ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/services/proxy-service-2l5lf:portname2/proxy/: bar (200; 4.733344ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/pods/http:proxy-service-2l5lf-4cx5t:1080/proxy/: t... (200; 4.709358ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:160/proxy/: foo (200; 4.646565ms)
Aug 13 18:59:09.257: INFO: (19) /api/v1/namespaces/proxy-256/pods/proxy-service-2l5lf-4cx5t:1080/proxy/: testtest (200; 5.419831ms)
STEP: deleting ReplicationController proxy-service-2l5lf in namespace proxy-256, will wait for the garbage collector to delete the pods
Aug 13 18:59:09.317: INFO: Deleting ReplicationController proxy-service-2l5lf took: 7.408531ms
Aug 13 18:59:09.417: INFO: Terminating ReplicationController proxy-service-2l5lf pods took: 100.202417ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:59:12.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-256" for this suite.

• [SLOW TEST:14.447 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:59
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":275,"completed":160,"skipped":2559,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:59:12.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1206
STEP: creating the pod
Aug 13 18:59:12.303: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2433'
Aug 13 18:59:12.537: INFO: stderr: ""
Aug 13 18:59:12.537: INFO: stdout: "pod/pause created\n"
Aug 13 18:59:12.537: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 13 18:59:12.537: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2433" to be "running and ready"
Aug 13 18:59:12.554: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 16.543818ms
Aug 13 18:59:14.558: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02064807s
Aug 13 18:59:16.562: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.024951179s
Aug 13 18:59:16.562: INFO: Pod "pause" satisfied condition "running and ready"
Aug 13 18:59:16.562: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 13 18:59:16.562: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2433'
Aug 13 18:59:16.671: INFO: stderr: ""
Aug 13 18:59:16.671: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 13 18:59:16.671: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2433'
Aug 13 18:59:16.769: INFO: stderr: ""
Aug 13 18:59:16.769: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 13 18:59:16.769: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2433'
Aug 13 18:59:16.871: INFO: stderr: ""
Aug 13 18:59:16.871: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 13 18:59:16.871: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2433'
Aug 13 18:59:16.974: INFO: stderr: ""
Aug 13 18:59:16.974: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          4s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1213
STEP: using delete to clean up resources
Aug 13 18:59:16.974: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2433'
Aug 13 18:59:17.126: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 18:59:17.126: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 13 18:59:17.126: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2433'
Aug 13 18:59:17.755: INFO: stderr: "No resources found in kubectl-2433 namespace.\n"
Aug 13 18:59:17.755: INFO: stdout: ""
Aug 13 18:59:17.755: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2433 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 13 18:59:17.898: INFO: stderr: ""
Aug 13 18:59:17.898: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:59:17.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2433" for this suite.

• [SLOW TEST:5.677 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1203
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":275,"completed":161,"skipped":2602,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:59:17.931: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 18:59:17.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-506" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":275,"completed":162,"skipped":2613,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 18:59:18.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name cm-test-opt-del-f9063662-79dc-427f-90bc-a70133e8e316
STEP: Creating configMap with name cm-test-opt-upd-a3c2b95d-ef9d-4ff9-ad5f-a7914eb858eb
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-f9063662-79dc-427f-90bc-a70133e8e316
STEP: Updating configmap cm-test-opt-upd-a3c2b95d-ef9d-4ff9-ad5f-a7914eb858eb
STEP: Creating configMap with name cm-test-opt-create-6c453fe0-6570-47b1-8295-d63fb817b6e8
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:00:33.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2714" for this suite.

• [SLOW TEST:75.676 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":163,"skipped":2625,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:00:33.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:00:35.397: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:00:37.405: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942035, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942035, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942035, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942034, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:00:39.409: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942035, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942035, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942035, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942034, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:00:42.873: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:00:43.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3429" for this suite.
STEP: Destroying namespace "webhook-3429-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.579 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":275,"completed":164,"skipped":2629,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:00:44.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:178
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:00:46.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:00:54.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2888" for this suite.

• [SLOW TEST:10.260 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":275,"completed":165,"skipped":2638,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:00:54.556: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 13 19:01:05.255: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 13 19:01:06.440: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 13 19:01:08.441: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 13 19:01:08.445: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 13 19:01:10.441: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 13 19:01:10.775: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 13 19:01:12.441: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 13 19:01:12.565: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 13 19:01:14.441: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 13 19:01:14.470: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 13 19:01:16.441: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 13 19:01:16.859: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:01:17.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-234" for this suite.

• [SLOW TEST:23.470 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":275,"completed":166,"skipped":2656,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:01:18.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:01:34.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3972" for this suite.

• [SLOW TEST:16.194 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":275,"completed":167,"skipped":2682,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:01:34.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:01:35.950: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:01:37.958: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942096, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942096, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942096, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942095, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:01:40.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942096, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942096, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942096, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732942095, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:01:43.014: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:01:45.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4799" for this suite.
STEP: Destroying namespace "webhook-4799-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:11.424 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":275,"completed":168,"skipped":2700,"failed":0}
SSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:01:45.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod liveness-343d795a-5b60-47b4-8f73-0b8a78e0787f in namespace container-probe-4276
Aug 13 19:01:51.554: INFO: Started pod liveness-343d795a-5b60-47b4-8f73-0b8a78e0787f in namespace container-probe-4276
STEP: checking the pod's current state and verifying that restartCount is present
Aug 13 19:01:51.594: INFO: Initial restart count of pod liveness-343d795a-5b60-47b4-8f73-0b8a78e0787f is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:05:52.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4276" for this suite.

• [SLOW TEST:247.164 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":275,"completed":169,"skipped":2704,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:05:52.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:05:53.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Aug 13 19:05:56.153: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4912 create -f -'
Aug 13 19:06:04.755: INFO: stderr: ""
Aug 13 19:06:04.756: INFO: stdout: "e2e-test-crd-publish-openapi-4713-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 13 19:06:04.756: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4912 delete e2e-test-crd-publish-openapi-4713-crds test-cr'
Aug 13 19:06:04.933: INFO: stderr: ""
Aug 13 19:06:04.933: INFO: stdout: "e2e-test-crd-publish-openapi-4713-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Aug 13 19:06:04.934: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4912 apply -f -'
Aug 13 19:06:05.221: INFO: stderr: ""
Aug 13 19:06:05.221: INFO: stdout: "e2e-test-crd-publish-openapi-4713-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Aug 13 19:06:05.221: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4912 delete e2e-test-crd-publish-openapi-4713-crds test-cr'
Aug 13 19:06:05.333: INFO: stderr: ""
Aug 13 19:06:05.333: INFO: stdout: "e2e-test-crd-publish-openapi-4713-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Aug 13 19:06:05.333: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4713-crds'
Aug 13 19:06:05.631: INFO: stderr: ""
Aug 13 19:06:05.631: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4713-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:06:07.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4912" for this suite.

• [SLOW TEST:14.747 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":275,"completed":170,"skipped":2796,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:06:07.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating service endpoint-test2 in namespace services-6284
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6284 to expose endpoints map[]
Aug 13 19:06:07.872: INFO: Get endpoints failed (3.974206ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Aug 13 19:06:08.876: INFO: successfully validated that service endpoint-test2 in namespace services-6284 exposes endpoints map[] (1.008511293s elapsed)
STEP: Creating pod pod1 in namespace services-6284
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6284 to expose endpoints map[pod1:[80]]
Aug 13 19:06:13.257: INFO: successfully validated that service endpoint-test2 in namespace services-6284 exposes endpoints map[pod1:[80]] (4.373307212s elapsed)
STEP: Creating pod pod2 in namespace services-6284
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6284 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 13 19:06:16.588: INFO: successfully validated that service endpoint-test2 in namespace services-6284 exposes endpoints map[pod1:[80] pod2:[80]] (3.327415634s elapsed)
STEP: Deleting pod pod1 in namespace services-6284
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6284 to expose endpoints map[pod2:[80]]
Aug 13 19:06:17.646: INFO: successfully validated that service endpoint-test2 in namespace services-6284 exposes endpoints map[pod2:[80]] (1.053537931s elapsed)
STEP: Deleting pod pod2 in namespace services-6284
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6284 to expose endpoints map[]
Aug 13 19:06:18.721: INFO: successfully validated that service endpoint-test2 in namespace services-6284 exposes endpoints map[] (1.070354721s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:06:18.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6284" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:11.358 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":275,"completed":171,"skipped":2829,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:06:18.919: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override arguments
Aug 13 19:06:19.050: INFO: Waiting up to 5m0s for pod "client-containers-74310c49-fdb0-4712-adcc-530db15099ca" in namespace "containers-6006" to be "Succeeded or Failed"
Aug 13 19:06:19.079: INFO: Pod "client-containers-74310c49-fdb0-4712-adcc-530db15099ca": Phase="Pending", Reason="", readiness=false. Elapsed: 29.131305ms
Aug 13 19:06:21.083: INFO: Pod "client-containers-74310c49-fdb0-4712-adcc-530db15099ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033467255s
Aug 13 19:06:23.087: INFO: Pod "client-containers-74310c49-fdb0-4712-adcc-530db15099ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037609071s
Aug 13 19:06:25.234: INFO: Pod "client-containers-74310c49-fdb0-4712-adcc-530db15099ca": Phase="Running", Reason="", readiness=true. Elapsed: 6.184152874s
Aug 13 19:06:27.237: INFO: Pod "client-containers-74310c49-fdb0-4712-adcc-530db15099ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.187726582s
STEP: Saw pod success
Aug 13 19:06:27.237: INFO: Pod "client-containers-74310c49-fdb0-4712-adcc-530db15099ca" satisfied condition "Succeeded or Failed"
Aug 13 19:06:27.241: INFO: Trying to get logs from node kali-worker pod client-containers-74310c49-fdb0-4712-adcc-530db15099ca container test-container: 
STEP: delete the pod
Aug 13 19:06:27.367: INFO: Waiting for pod client-containers-74310c49-fdb0-4712-adcc-530db15099ca to disappear
Aug 13 19:06:27.372: INFO: Pod client-containers-74310c49-fdb0-4712-adcc-530db15099ca no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:06:27.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6006" for this suite.

• [SLOW TEST:8.460 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":275,"completed":172,"skipped":2831,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:06:27.378: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-4cdc1fd5-9bc9-4c65-b6d8-dae89c6f60a7
STEP: Creating a pod to test consume secrets
Aug 13 19:06:27.865: INFO: Waiting up to 5m0s for pod "pod-secrets-444a16f4-bf53-42fd-b9c2-fb555aa716ff" in namespace "secrets-5099" to be "Succeeded or Failed"
Aug 13 19:06:27.869: INFO: Pod "pod-secrets-444a16f4-bf53-42fd-b9c2-fb555aa716ff": Phase="Pending", Reason="", readiness=false. Elapsed: 3.826587ms
Aug 13 19:06:29.922: INFO: Pod "pod-secrets-444a16f4-bf53-42fd-b9c2-fb555aa716ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056888365s
Aug 13 19:06:31.945: INFO: Pod "pod-secrets-444a16f4-bf53-42fd-b9c2-fb555aa716ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079949765s
Aug 13 19:06:33.970: INFO: Pod "pod-secrets-444a16f4-bf53-42fd-b9c2-fb555aa716ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104681606s
STEP: Saw pod success
Aug 13 19:06:33.970: INFO: Pod "pod-secrets-444a16f4-bf53-42fd-b9c2-fb555aa716ff" satisfied condition "Succeeded or Failed"
Aug 13 19:06:33.973: INFO: Trying to get logs from node kali-worker pod pod-secrets-444a16f4-bf53-42fd-b9c2-fb555aa716ff container secret-volume-test: 
STEP: delete the pod
Aug 13 19:06:34.138: INFO: Waiting for pod pod-secrets-444a16f4-bf53-42fd-b9c2-fb555aa716ff to disappear
Aug 13 19:06:34.174: INFO: Pod pod-secrets-444a16f4-bf53-42fd-b9c2-fb555aa716ff no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:06:34.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5099" for this suite.

• [SLOW TEST:7.043 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":173,"skipped":2835,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:06:34.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 13 19:06:49.409: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:49.409: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:49.437763       7 log.go:172] (0xc002b402c0) (0xc000fee280) Create stream
I0813 19:06:49.437791       7 log.go:172] (0xc002b402c0) (0xc000fee280) Stream added, broadcasting: 1
I0813 19:06:49.439578       7 log.go:172] (0xc002b402c0) Reply frame received for 1
I0813 19:06:49.439616       7 log.go:172] (0xc002b402c0) (0xc0013f80a0) Create stream
I0813 19:06:49.439628       7 log.go:172] (0xc002b402c0) (0xc0013f80a0) Stream added, broadcasting: 3
I0813 19:06:49.440547       7 log.go:172] (0xc002b402c0) Reply frame received for 3
I0813 19:06:49.440579       7 log.go:172] (0xc002b402c0) (0xc000fee320) Create stream
I0813 19:06:49.440591       7 log.go:172] (0xc002b402c0) (0xc000fee320) Stream added, broadcasting: 5
I0813 19:06:49.441669       7 log.go:172] (0xc002b402c0) Reply frame received for 5
I0813 19:06:49.497709       7 log.go:172] (0xc002b402c0) Data frame received for 3
I0813 19:06:49.497749       7 log.go:172] (0xc0013f80a0) (3) Data frame handling
I0813 19:06:49.497761       7 log.go:172] (0xc0013f80a0) (3) Data frame sent
I0813 19:06:49.497784       7 log.go:172] (0xc002b402c0) Data frame received for 3
I0813 19:06:49.497801       7 log.go:172] (0xc0013f80a0) (3) Data frame handling
I0813 19:06:49.497824       7 log.go:172] (0xc002b402c0) Data frame received for 5
I0813 19:06:49.497835       7 log.go:172] (0xc000fee320) (5) Data frame handling
I0813 19:06:49.499087       7 log.go:172] (0xc002b402c0) Data frame received for 1
I0813 19:06:49.499106       7 log.go:172] (0xc000fee280) (1) Data frame handling
I0813 19:06:49.499116       7 log.go:172] (0xc000fee280) (1) Data frame sent
I0813 19:06:49.499144       7 log.go:172] (0xc002b402c0) (0xc000fee280) Stream removed, broadcasting: 1
I0813 19:06:49.499170       7 log.go:172] (0xc002b402c0) Go away received
I0813 19:06:49.499284       7 log.go:172] (0xc002b402c0) (0xc000fee280) Stream removed, broadcasting: 1
I0813 19:06:49.499313       7 log.go:172] (0xc002b402c0) (0xc0013f80a0) Stream removed, broadcasting: 3
I0813 19:06:49.499330       7 log.go:172] (0xc002b402c0) (0xc000fee320) Stream removed, broadcasting: 5
Aug 13 19:06:49.499: INFO: Exec stderr: ""
Aug 13 19:06:49.499: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:49.499: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:49.766233       7 log.go:172] (0xc001c4a6e0) (0xc001bb90e0) Create stream
I0813 19:06:49.766263       7 log.go:172] (0xc001c4a6e0) (0xc001bb90e0) Stream added, broadcasting: 1
I0813 19:06:49.767914       7 log.go:172] (0xc001c4a6e0) Reply frame received for 1
I0813 19:06:49.767961       7 log.go:172] (0xc001c4a6e0) (0xc001bb9220) Create stream
I0813 19:06:49.767980       7 log.go:172] (0xc001c4a6e0) (0xc001bb9220) Stream added, broadcasting: 3
I0813 19:06:49.768861       7 log.go:172] (0xc001c4a6e0) Reply frame received for 3
I0813 19:06:49.768903       7 log.go:172] (0xc001c4a6e0) (0xc000fee460) Create stream
I0813 19:06:49.768916       7 log.go:172] (0xc001c4a6e0) (0xc000fee460) Stream added, broadcasting: 5
I0813 19:06:49.769729       7 log.go:172] (0xc001c4a6e0) Reply frame received for 5
I0813 19:06:49.819950       7 log.go:172] (0xc001c4a6e0) Data frame received for 3
I0813 19:06:49.819986       7 log.go:172] (0xc001bb9220) (3) Data frame handling
I0813 19:06:49.819995       7 log.go:172] (0xc001bb9220) (3) Data frame sent
I0813 19:06:49.820000       7 log.go:172] (0xc001c4a6e0) Data frame received for 3
I0813 19:06:49.820004       7 log.go:172] (0xc001bb9220) (3) Data frame handling
I0813 19:06:49.820020       7 log.go:172] (0xc001c4a6e0) Data frame received for 5
I0813 19:06:49.820030       7 log.go:172] (0xc000fee460) (5) Data frame handling
I0813 19:06:49.821259       7 log.go:172] (0xc001c4a6e0) Data frame received for 1
I0813 19:06:49.821290       7 log.go:172] (0xc001bb90e0) (1) Data frame handling
I0813 19:06:49.821308       7 log.go:172] (0xc001bb90e0) (1) Data frame sent
I0813 19:06:49.821323       7 log.go:172] (0xc001c4a6e0) (0xc001bb90e0) Stream removed, broadcasting: 1
I0813 19:06:49.821340       7 log.go:172] (0xc001c4a6e0) Go away received
I0813 19:06:49.821441       7 log.go:172] (0xc001c4a6e0) (0xc001bb90e0) Stream removed, broadcasting: 1
I0813 19:06:49.821456       7 log.go:172] (0xc001c4a6e0) (0xc001bb9220) Stream removed, broadcasting: 3
I0813 19:06:49.821466       7 log.go:172] (0xc001c4a6e0) (0xc000fee460) Stream removed, broadcasting: 5
Aug 13 19:06:49.821: INFO: Exec stderr: ""
Aug 13 19:06:49.821: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:49.821: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:49.847144       7 log.go:172] (0xc002b408f0) (0xc000fee780) Create stream
I0813 19:06:49.847170       7 log.go:172] (0xc002b408f0) (0xc000fee780) Stream added, broadcasting: 1
I0813 19:06:49.849682       7 log.go:172] (0xc002b408f0) Reply frame received for 1
I0813 19:06:49.849715       7 log.go:172] (0xc002b408f0) (0xc001bb9360) Create stream
I0813 19:06:49.849726       7 log.go:172] (0xc002b408f0) (0xc001bb9360) Stream added, broadcasting: 3
I0813 19:06:49.850684       7 log.go:172] (0xc002b408f0) Reply frame received for 3
I0813 19:06:49.850732       7 log.go:172] (0xc002b408f0) (0xc001722e60) Create stream
I0813 19:06:49.850749       7 log.go:172] (0xc002b408f0) (0xc001722e60) Stream added, broadcasting: 5
I0813 19:06:49.851754       7 log.go:172] (0xc002b408f0) Reply frame received for 5
I0813 19:06:49.921313       7 log.go:172] (0xc002b408f0) Data frame received for 5
I0813 19:06:49.921341       7 log.go:172] (0xc001722e60) (5) Data frame handling
I0813 19:06:49.921373       7 log.go:172] (0xc002b408f0) Data frame received for 3
I0813 19:06:49.921400       7 log.go:172] (0xc001bb9360) (3) Data frame handling
I0813 19:06:49.921417       7 log.go:172] (0xc001bb9360) (3) Data frame sent
I0813 19:06:49.921423       7 log.go:172] (0xc002b408f0) Data frame received for 3
I0813 19:06:49.921429       7 log.go:172] (0xc001bb9360) (3) Data frame handling
I0813 19:06:49.928826       7 log.go:172] (0xc002b408f0) Data frame received for 1
I0813 19:06:49.928846       7 log.go:172] (0xc000fee780) (1) Data frame handling
I0813 19:06:49.928855       7 log.go:172] (0xc000fee780) (1) Data frame sent
I0813 19:06:49.928866       7 log.go:172] (0xc002b408f0) (0xc000fee780) Stream removed, broadcasting: 1
I0813 19:06:49.928962       7 log.go:172] (0xc002b408f0) (0xc000fee780) Stream removed, broadcasting: 1
I0813 19:06:49.928972       7 log.go:172] (0xc002b408f0) (0xc001bb9360) Stream removed, broadcasting: 3
I0813 19:06:49.928978       7 log.go:172] (0xc002b408f0) (0xc001722e60) Stream removed, broadcasting: 5
Aug 13 19:06:49.928: INFO: Exec stderr: ""
I0813 19:06:49.929004       7 log.go:172] (0xc002b408f0) Go away received
Aug 13 19:06:49.929: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:49.929: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:49.953317       7 log.go:172] (0xc002e2e370) (0xc000d80b40) Create stream
I0813 19:06:49.953343       7 log.go:172] (0xc002e2e370) (0xc000d80b40) Stream added, broadcasting: 1
I0813 19:06:49.954955       7 log.go:172] (0xc002e2e370) Reply frame received for 1
I0813 19:06:49.954999       7 log.go:172] (0xc002e2e370) (0xc0013f8640) Create stream
I0813 19:06:49.955010       7 log.go:172] (0xc002e2e370) (0xc0013f8640) Stream added, broadcasting: 3
I0813 19:06:49.955805       7 log.go:172] (0xc002e2e370) Reply frame received for 3
I0813 19:06:49.955864       7 log.go:172] (0xc002e2e370) (0xc0013f8820) Create stream
I0813 19:06:49.955877       7 log.go:172] (0xc002e2e370) (0xc0013f8820) Stream added, broadcasting: 5
I0813 19:06:49.956587       7 log.go:172] (0xc002e2e370) Reply frame received for 5
I0813 19:06:50.021521       7 log.go:172] (0xc002e2e370) Data frame received for 5
I0813 19:06:50.021561       7 log.go:172] (0xc0013f8820) (5) Data frame handling
I0813 19:06:50.021583       7 log.go:172] (0xc002e2e370) Data frame received for 3
I0813 19:06:50.021594       7 log.go:172] (0xc0013f8640) (3) Data frame handling
I0813 19:06:50.021604       7 log.go:172] (0xc0013f8640) (3) Data frame sent
I0813 19:06:50.021616       7 log.go:172] (0xc002e2e370) Data frame received for 3
I0813 19:06:50.021624       7 log.go:172] (0xc0013f8640) (3) Data frame handling
I0813 19:06:50.023268       7 log.go:172] (0xc002e2e370) Data frame received for 1
I0813 19:06:50.023298       7 log.go:172] (0xc000d80b40) (1) Data frame handling
I0813 19:06:50.023314       7 log.go:172] (0xc000d80b40) (1) Data frame sent
I0813 19:06:50.023336       7 log.go:172] (0xc002e2e370) (0xc000d80b40) Stream removed, broadcasting: 1
I0813 19:06:50.023358       7 log.go:172] (0xc002e2e370) Go away received
I0813 19:06:50.023454       7 log.go:172] (0xc002e2e370) (0xc000d80b40) Stream removed, broadcasting: 1
I0813 19:06:50.023469       7 log.go:172] (0xc002e2e370) (0xc0013f8640) Stream removed, broadcasting: 3
I0813 19:06:50.023480       7 log.go:172] (0xc002e2e370) (0xc0013f8820) Stream removed, broadcasting: 5
Aug 13 19:06:50.023: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 13 19:06:50.023: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:50.023: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:50.051942       7 log.go:172] (0xc002e2e9a0) (0xc000d80f00) Create stream
I0813 19:06:50.051963       7 log.go:172] (0xc002e2e9a0) (0xc000d80f00) Stream added, broadcasting: 1
I0813 19:06:50.053818       7 log.go:172] (0xc002e2e9a0) Reply frame received for 1
I0813 19:06:50.053848       7 log.go:172] (0xc002e2e9a0) (0xc0013f88c0) Create stream
I0813 19:06:50.053862       7 log.go:172] (0xc002e2e9a0) (0xc0013f88c0) Stream added, broadcasting: 3
I0813 19:06:50.054760       7 log.go:172] (0xc002e2e9a0) Reply frame received for 3
I0813 19:06:50.054802       7 log.go:172] (0xc002e2e9a0) (0xc001bb97c0) Create stream
I0813 19:06:50.054818       7 log.go:172] (0xc002e2e9a0) (0xc001bb97c0) Stream added, broadcasting: 5
I0813 19:06:50.055734       7 log.go:172] (0xc002e2e9a0) Reply frame received for 5
I0813 19:06:50.124572       7 log.go:172] (0xc002e2e9a0) Data frame received for 3
I0813 19:06:50.124624       7 log.go:172] (0xc0013f88c0) (3) Data frame handling
I0813 19:06:50.124657       7 log.go:172] (0xc002e2e9a0) Data frame received for 5
I0813 19:06:50.124694       7 log.go:172] (0xc001bb97c0) (5) Data frame handling
I0813 19:06:50.124825       7 log.go:172] (0xc0013f88c0) (3) Data frame sent
I0813 19:06:50.124855       7 log.go:172] (0xc002e2e9a0) Data frame received for 3
I0813 19:06:50.124874       7 log.go:172] (0xc0013f88c0) (3) Data frame handling
I0813 19:06:50.125975       7 log.go:172] (0xc002e2e9a0) Data frame received for 1
I0813 19:06:50.125995       7 log.go:172] (0xc000d80f00) (1) Data frame handling
I0813 19:06:50.126005       7 log.go:172] (0xc000d80f00) (1) Data frame sent
I0813 19:06:50.126151       7 log.go:172] (0xc002e2e9a0) (0xc000d80f00) Stream removed, broadcasting: 1
I0813 19:06:50.126194       7 log.go:172] (0xc002e2e9a0) Go away received
I0813 19:06:50.126313       7 log.go:172] (0xc002e2e9a0) (0xc000d80f00) Stream removed, broadcasting: 1
I0813 19:06:50.126338       7 log.go:172] (0xc002e2e9a0) (0xc0013f88c0) Stream removed, broadcasting: 3
I0813 19:06:50.126356       7 log.go:172] (0xc002e2e9a0) (0xc001bb97c0) Stream removed, broadcasting: 5
Aug 13 19:06:50.126: INFO: Exec stderr: ""
Aug 13 19:06:50.126: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:50.126: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:50.157879       7 log.go:172] (0xc002e2efd0) (0xc000d81680) Create stream
I0813 19:06:50.157916       7 log.go:172] (0xc002e2efd0) (0xc000d81680) Stream added, broadcasting: 1
I0813 19:06:50.160456       7 log.go:172] (0xc002e2efd0) Reply frame received for 1
I0813 19:06:50.160503       7 log.go:172] (0xc002e2efd0) (0xc001722fa0) Create stream
I0813 19:06:50.160517       7 log.go:172] (0xc002e2efd0) (0xc001722fa0) Stream added, broadcasting: 3
I0813 19:06:50.161685       7 log.go:172] (0xc002e2efd0) Reply frame received for 3
I0813 19:06:50.161717       7 log.go:172] (0xc002e2efd0) (0xc000feea00) Create stream
I0813 19:06:50.161735       7 log.go:172] (0xc002e2efd0) (0xc000feea00) Stream added, broadcasting: 5
I0813 19:06:50.162633       7 log.go:172] (0xc002e2efd0) Reply frame received for 5
I0813 19:06:50.225654       7 log.go:172] (0xc002e2efd0) Data frame received for 3
I0813 19:06:50.225692       7 log.go:172] (0xc001722fa0) (3) Data frame handling
I0813 19:06:50.225708       7 log.go:172] (0xc001722fa0) (3) Data frame sent
I0813 19:06:50.225805       7 log.go:172] (0xc002e2efd0) Data frame received for 3
I0813 19:06:50.225831       7 log.go:172] (0xc001722fa0) (3) Data frame handling
I0813 19:06:50.226299       7 log.go:172] (0xc002e2efd0) Data frame received for 5
I0813 19:06:50.226328       7 log.go:172] (0xc000feea00) (5) Data frame handling
I0813 19:06:50.227336       7 log.go:172] (0xc002e2efd0) Data frame received for 1
I0813 19:06:50.227365       7 log.go:172] (0xc000d81680) (1) Data frame handling
I0813 19:06:50.227399       7 log.go:172] (0xc000d81680) (1) Data frame sent
I0813 19:06:50.227422       7 log.go:172] (0xc002e2efd0) (0xc000d81680) Stream removed, broadcasting: 1
I0813 19:06:50.227439       7 log.go:172] (0xc002e2efd0) Go away received
I0813 19:06:50.227548       7 log.go:172] (0xc002e2efd0) (0xc000d81680) Stream removed, broadcasting: 1
I0813 19:06:50.227581       7 log.go:172] (0xc002e2efd0) (0xc001722fa0) Stream removed, broadcasting: 3
I0813 19:06:50.227604       7 log.go:172] (0xc002e2efd0) (0xc000feea00) Stream removed, broadcasting: 5
Aug 13 19:06:50.227: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 13 19:06:50.227: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:50.227: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:50.264833       7 log.go:172] (0xc00213ca50) (0xc001723400) Create stream
I0813 19:06:50.264874       7 log.go:172] (0xc00213ca50) (0xc001723400) Stream added, broadcasting: 1
I0813 19:06:50.267182       7 log.go:172] (0xc00213ca50) Reply frame received for 1
I0813 19:06:50.267220       7 log.go:172] (0xc00213ca50) (0xc001723540) Create stream
I0813 19:06:50.267230       7 log.go:172] (0xc00213ca50) (0xc001723540) Stream added, broadcasting: 3
I0813 19:06:50.268062       7 log.go:172] (0xc00213ca50) Reply frame received for 3
I0813 19:06:50.268102       7 log.go:172] (0xc00213ca50) (0xc001723680) Create stream
I0813 19:06:50.268116       7 log.go:172] (0xc00213ca50) (0xc001723680) Stream added, broadcasting: 5
I0813 19:06:50.269055       7 log.go:172] (0xc00213ca50) Reply frame received for 5
I0813 19:06:50.342344       7 log.go:172] (0xc00213ca50) Data frame received for 5
I0813 19:06:50.342385       7 log.go:172] (0xc001723680) (5) Data frame handling
I0813 19:06:50.342413       7 log.go:172] (0xc00213ca50) Data frame received for 3
I0813 19:06:50.342427       7 log.go:172] (0xc001723540) (3) Data frame handling
I0813 19:06:50.342443       7 log.go:172] (0xc001723540) (3) Data frame sent
I0813 19:06:50.342457       7 log.go:172] (0xc00213ca50) Data frame received for 3
I0813 19:06:50.342470       7 log.go:172] (0xc001723540) (3) Data frame handling
I0813 19:06:50.344027       7 log.go:172] (0xc00213ca50) Data frame received for 1
I0813 19:06:50.344062       7 log.go:172] (0xc001723400) (1) Data frame handling
I0813 19:06:50.344093       7 log.go:172] (0xc001723400) (1) Data frame sent
I0813 19:06:50.344120       7 log.go:172] (0xc00213ca50) (0xc001723400) Stream removed, broadcasting: 1
I0813 19:06:50.344180       7 log.go:172] (0xc00213ca50) (0xc001723400) Stream removed, broadcasting: 1
I0813 19:06:50.344196       7 log.go:172] (0xc00213ca50) (0xc001723540) Stream removed, broadcasting: 3
I0813 19:06:50.344254       7 log.go:172] (0xc00213ca50) Go away received
I0813 19:06:50.344405       7 log.go:172] (0xc00213ca50) (0xc001723680) Stream removed, broadcasting: 5
Aug 13 19:06:50.344: INFO: Exec stderr: ""
Aug 13 19:06:50.344: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:50.344: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:50.374301       7 log.go:172] (0xc002e2f600) (0xc000d81900) Create stream
I0813 19:06:50.374327       7 log.go:172] (0xc002e2f600) (0xc000d81900) Stream added, broadcasting: 1
I0813 19:06:50.376594       7 log.go:172] (0xc002e2f600) Reply frame received for 1
I0813 19:06:50.376633       7 log.go:172] (0xc002e2f600) (0xc001bb9a40) Create stream
I0813 19:06:50.376650       7 log.go:172] (0xc002e2f600) (0xc001bb9a40) Stream added, broadcasting: 3
I0813 19:06:50.377746       7 log.go:172] (0xc002e2f600) Reply frame received for 3
I0813 19:06:50.377789       7 log.go:172] (0xc002e2f600) (0xc001bb9c20) Create stream
I0813 19:06:50.377797       7 log.go:172] (0xc002e2f600) (0xc001bb9c20) Stream added, broadcasting: 5
I0813 19:06:50.378619       7 log.go:172] (0xc002e2f600) Reply frame received for 5
I0813 19:06:50.437252       7 log.go:172] (0xc002e2f600) Data frame received for 5
I0813 19:06:50.437295       7 log.go:172] (0xc001bb9c20) (5) Data frame handling
I0813 19:06:50.437312       7 log.go:172] (0xc002e2f600) Data frame received for 3
I0813 19:06:50.437326       7 log.go:172] (0xc001bb9a40) (3) Data frame handling
I0813 19:06:50.437333       7 log.go:172] (0xc001bb9a40) (3) Data frame sent
I0813 19:06:50.437337       7 log.go:172] (0xc002e2f600) Data frame received for 3
I0813 19:06:50.437341       7 log.go:172] (0xc001bb9a40) (3) Data frame handling
I0813 19:06:50.438687       7 log.go:172] (0xc002e2f600) Data frame received for 1
I0813 19:06:50.438698       7 log.go:172] (0xc000d81900) (1) Data frame handling
I0813 19:06:50.438706       7 log.go:172] (0xc000d81900) (1) Data frame sent
I0813 19:06:50.438715       7 log.go:172] (0xc002e2f600) (0xc000d81900) Stream removed, broadcasting: 1
I0813 19:06:50.438775       7 log.go:172] (0xc002e2f600) (0xc000d81900) Stream removed, broadcasting: 1
I0813 19:06:50.438791       7 log.go:172] (0xc002e2f600) (0xc001bb9a40) Stream removed, broadcasting: 3
I0813 19:06:50.438813       7 log.go:172] (0xc002e2f600) (0xc001bb9c20) Stream removed, broadcasting: 5
Aug 13 19:06:50.438: INFO: Exec stderr: ""
Aug 13 19:06:50.438: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:50.438: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:50.438893       7 log.go:172] (0xc002e2f600) Go away received
I0813 19:06:50.472932       7 log.go:172] (0xc001c4ad10) (0xc001faa6e0) Create stream
I0813 19:06:50.472975       7 log.go:172] (0xc001c4ad10) (0xc001faa6e0) Stream added, broadcasting: 1
I0813 19:06:50.479762       7 log.go:172] (0xc001c4ad10) Reply frame received for 1
I0813 19:06:50.479796       7 log.go:172] (0xc001c4ad10) (0xc000d81ae0) Create stream
I0813 19:06:50.479807       7 log.go:172] (0xc001c4ad10) (0xc000d81ae0) Stream added, broadcasting: 3
I0813 19:06:50.481004       7 log.go:172] (0xc001c4ad10) Reply frame received for 3
I0813 19:06:50.481074       7 log.go:172] (0xc001c4ad10) (0xc000feeaa0) Create stream
I0813 19:06:50.481089       7 log.go:172] (0xc001c4ad10) (0xc000feeaa0) Stream added, broadcasting: 5
I0813 19:06:50.482047       7 log.go:172] (0xc001c4ad10) Reply frame received for 5
I0813 19:06:50.541049       7 log.go:172] (0xc001c4ad10) Data frame received for 5
I0813 19:06:50.541075       7 log.go:172] (0xc000feeaa0) (5) Data frame handling
I0813 19:06:50.541113       7 log.go:172] (0xc001c4ad10) Data frame received for 3
I0813 19:06:50.541127       7 log.go:172] (0xc000d81ae0) (3) Data frame handling
I0813 19:06:50.541141       7 log.go:172] (0xc000d81ae0) (3) Data frame sent
I0813 19:06:50.541152       7 log.go:172] (0xc001c4ad10) Data frame received for 3
I0813 19:06:50.541162       7 log.go:172] (0xc000d81ae0) (3) Data frame handling
I0813 19:06:50.542369       7 log.go:172] (0xc001c4ad10) Data frame received for 1
I0813 19:06:50.542386       7 log.go:172] (0xc001faa6e0) (1) Data frame handling
I0813 19:06:50.542401       7 log.go:172] (0xc001faa6e0) (1) Data frame sent
I0813 19:06:50.542412       7 log.go:172] (0xc001c4ad10) (0xc001faa6e0) Stream removed, broadcasting: 1
I0813 19:06:50.542436       7 log.go:172] (0xc001c4ad10) Go away received
I0813 19:06:50.542564       7 log.go:172] (0xc001c4ad10) (0xc001faa6e0) Stream removed, broadcasting: 1
I0813 19:06:50.542581       7 log.go:172] (0xc001c4ad10) (0xc000d81ae0) Stream removed, broadcasting: 3
I0813 19:06:50.542591       7 log.go:172] (0xc001c4ad10) (0xc000feeaa0) Stream removed, broadcasting: 5
Aug 13 19:06:50.542: INFO: Exec stderr: ""
Aug 13 19:06:50.542: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-6567 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:06:50.542: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:06:50.570913       7 log.go:172] (0xc001c4b340) (0xc001faa960) Create stream
I0813 19:06:50.570936       7 log.go:172] (0xc001c4b340) (0xc001faa960) Stream added, broadcasting: 1
I0813 19:06:50.573047       7 log.go:172] (0xc001c4b340) Reply frame received for 1
I0813 19:06:50.573081       7 log.go:172] (0xc001c4b340) (0xc001faaa00) Create stream
I0813 19:06:50.573098       7 log.go:172] (0xc001c4b340) (0xc001faaa00) Stream added, broadcasting: 3
I0813 19:06:50.574115       7 log.go:172] (0xc001c4b340) Reply frame received for 3
I0813 19:06:50.574152       7 log.go:172] (0xc001c4b340) (0xc001faaaa0) Create stream
I0813 19:06:50.574172       7 log.go:172] (0xc001c4b340) (0xc001faaaa0) Stream added, broadcasting: 5
I0813 19:06:50.574872       7 log.go:172] (0xc001c4b340) Reply frame received for 5
I0813 19:06:50.644118       7 log.go:172] (0xc001c4b340) Data frame received for 3
I0813 19:06:50.644166       7 log.go:172] (0xc001faaa00) (3) Data frame handling
I0813 19:06:50.644193       7 log.go:172] (0xc001faaa00) (3) Data frame sent
I0813 19:06:50.644212       7 log.go:172] (0xc001c4b340) Data frame received for 3
I0813 19:06:50.644222       7 log.go:172] (0xc001faaa00) (3) Data frame handling
I0813 19:06:50.644260       7 log.go:172] (0xc001c4b340) Data frame received for 5
I0813 19:06:50.644277       7 log.go:172] (0xc001faaaa0) (5) Data frame handling
I0813 19:06:50.645542       7 log.go:172] (0xc001c4b340) Data frame received for 1
I0813 19:06:50.645555       7 log.go:172] (0xc001faa960) (1) Data frame handling
I0813 19:06:50.645572       7 log.go:172] (0xc001faa960) (1) Data frame sent
I0813 19:06:50.645585       7 log.go:172] (0xc001c4b340) (0xc001faa960) Stream removed, broadcasting: 1
I0813 19:06:50.645658       7 log.go:172] (0xc001c4b340) (0xc001faa960) Stream removed, broadcasting: 1
I0813 19:06:50.645670       7 log.go:172] (0xc001c4b340) (0xc001faaa00) Stream removed, broadcasting: 3
I0813 19:06:50.645742       7 log.go:172] (0xc001c4b340) Go away received
I0813 19:06:50.645775       7 log.go:172] (0xc001c4b340) (0xc001faaaa0) Stream removed, broadcasting: 5
Aug 13 19:06:50.645: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:06:50.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-6567" for this suite.

• [SLOW TEST:16.231 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":174,"skipped":2852,"failed":0}
SS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:06:50.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service externalname-service with the type=ExternalName in namespace services-5913
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-5913
I0813 19:06:50.935329       7 runners.go:190] Created replication controller with name: externalname-service, namespace: services-5913, replica count: 2
I0813 19:06:53.985746       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 19:06:56.985959       7 runners.go:190] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 13 19:06:56.986: INFO: Creating new exec pod
Aug 13 19:07:02.078: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5913 execpod8w7gb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Aug 13 19:07:02.281: INFO: stderr: "I0813 19:07:02.209320    2280 log.go:172] (0xc0009fd6b0) (0xc000b885a0) Create stream\nI0813 19:07:02.209380    2280 log.go:172] (0xc0009fd6b0) (0xc000b885a0) Stream added, broadcasting: 1\nI0813 19:07:02.211835    2280 log.go:172] (0xc0009fd6b0) Reply frame received for 1\nI0813 19:07:02.211902    2280 log.go:172] (0xc0009fd6b0) (0xc000b1e0a0) Create stream\nI0813 19:07:02.211924    2280 log.go:172] (0xc0009fd6b0) (0xc000b1e0a0) Stream added, broadcasting: 3\nI0813 19:07:02.213011    2280 log.go:172] (0xc0009fd6b0) Reply frame received for 3\nI0813 19:07:02.213048    2280 log.go:172] (0xc0009fd6b0) (0xc000b88640) Create stream\nI0813 19:07:02.213059    2280 log.go:172] (0xc0009fd6b0) (0xc000b88640) Stream added, broadcasting: 5\nI0813 19:07:02.214056    2280 log.go:172] (0xc0009fd6b0) Reply frame received for 5\nI0813 19:07:02.271170    2280 log.go:172] (0xc0009fd6b0) Data frame received for 5\nI0813 19:07:02.271198    2280 log.go:172] (0xc000b88640) (5) Data frame handling\nI0813 19:07:02.271215    2280 log.go:172] (0xc000b88640) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0813 19:07:02.271629    2280 log.go:172] (0xc0009fd6b0) Data frame received for 5\nI0813 19:07:02.271652    2280 log.go:172] (0xc000b88640) (5) Data frame handling\nI0813 19:07:02.271673    2280 log.go:172] (0xc000b88640) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0813 19:07:02.272121    2280 log.go:172] (0xc0009fd6b0) Data frame received for 5\nI0813 19:07:02.272142    2280 log.go:172] (0xc000b88640) (5) Data frame handling\nI0813 19:07:02.272183    2280 log.go:172] (0xc0009fd6b0) Data frame received for 3\nI0813 19:07:02.272211    2280 log.go:172] (0xc000b1e0a0) (3) Data frame handling\nI0813 19:07:02.273882    2280 log.go:172] (0xc0009fd6b0) Data frame received for 1\nI0813 19:07:02.273904    2280 log.go:172] (0xc000b885a0) (1) Data frame handling\nI0813 19:07:02.273928    2280 log.go:172] (0xc000b885a0) (1) Data frame sent\nI0813 19:07:02.273955    2280 log.go:172] (0xc0009fd6b0) (0xc000b885a0) Stream removed, broadcasting: 1\nI0813 19:07:02.274025    2280 log.go:172] (0xc0009fd6b0) Go away received\nI0813 19:07:02.274420    2280 log.go:172] (0xc0009fd6b0) (0xc000b885a0) Stream removed, broadcasting: 1\nI0813 19:07:02.274436    2280 log.go:172] (0xc0009fd6b0) (0xc000b1e0a0) Stream removed, broadcasting: 3\nI0813 19:07:02.274443    2280 log.go:172] (0xc0009fd6b0) (0xc000b88640) Stream removed, broadcasting: 5\n"
Aug 13 19:07:02.281: INFO: stdout: ""
Aug 13 19:07:02.281: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5913 execpod8w7gb -- /bin/sh -x -c nc -zv -t -w 2 10.96.104.166 80'
Aug 13 19:07:02.467: INFO: stderr: "I0813 19:07:02.402008    2300 log.go:172] (0xc000a2ce70) (0xc000a7a500) Create stream\nI0813 19:07:02.402070    2300 log.go:172] (0xc000a2ce70) (0xc000a7a500) Stream added, broadcasting: 1\nI0813 19:07:02.405111    2300 log.go:172] (0xc000a2ce70) Reply frame received for 1\nI0813 19:07:02.405147    2300 log.go:172] (0xc000a2ce70) (0xc000a08000) Create stream\nI0813 19:07:02.405155    2300 log.go:172] (0xc000a2ce70) (0xc000a08000) Stream added, broadcasting: 3\nI0813 19:07:02.406053    2300 log.go:172] (0xc000a2ce70) Reply frame received for 3\nI0813 19:07:02.406089    2300 log.go:172] (0xc000a2ce70) (0xc00067b900) Create stream\nI0813 19:07:02.406097    2300 log.go:172] (0xc000a2ce70) (0xc00067b900) Stream added, broadcasting: 5\nI0813 19:07:02.407011    2300 log.go:172] (0xc000a2ce70) Reply frame received for 5\nI0813 19:07:02.457648    2300 log.go:172] (0xc000a2ce70) Data frame received for 3\nI0813 19:07:02.457678    2300 log.go:172] (0xc000a08000) (3) Data frame handling\nI0813 19:07:02.457789    2300 log.go:172] (0xc000a2ce70) Data frame received for 5\nI0813 19:07:02.457803    2300 log.go:172] (0xc00067b900) (5) Data frame handling\nI0813 19:07:02.457815    2300 log.go:172] (0xc00067b900) (5) Data frame sent\nI0813 19:07:02.457820    2300 log.go:172] (0xc000a2ce70) Data frame received for 5\nI0813 19:07:02.457825    2300 log.go:172] (0xc00067b900) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.104.166 80\nConnection to 10.96.104.166 80 port [tcp/http] succeeded!\nI0813 19:07:02.459329    2300 log.go:172] (0xc000a2ce70) Data frame received for 1\nI0813 19:07:02.459354    2300 log.go:172] (0xc000a7a500) (1) Data frame handling\nI0813 19:07:02.459384    2300 log.go:172] (0xc000a7a500) (1) Data frame sent\nI0813 19:07:02.459402    2300 log.go:172] (0xc000a2ce70) (0xc000a7a500) Stream removed, broadcasting: 1\nI0813 19:07:02.459558    2300 log.go:172] (0xc000a2ce70) Go away received\nI0813 19:07:02.459907    2300 log.go:172] (0xc000a2ce70) (0xc000a7a500) Stream removed, broadcasting: 1\nI0813 19:07:02.459929    2300 log.go:172] (0xc000a2ce70) (0xc000a08000) Stream removed, broadcasting: 3\nI0813 19:07:02.459941    2300 log.go:172] (0xc000a2ce70) (0xc00067b900) Stream removed, broadcasting: 5\n"
Aug 13 19:07:02.467: INFO: stdout: ""
Aug 13 19:07:02.467: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5913 execpod8w7gb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.13 30254'
Aug 13 19:07:02.683: INFO: stderr: "I0813 19:07:02.590326    2322 log.go:172] (0xc000a33ad0) (0xc000a46a00) Create stream\nI0813 19:07:02.590377    2322 log.go:172] (0xc000a33ad0) (0xc000a46a00) Stream added, broadcasting: 1\nI0813 19:07:02.595473    2322 log.go:172] (0xc000a33ad0) Reply frame received for 1\nI0813 19:07:02.595540    2322 log.go:172] (0xc000a33ad0) (0xc0005375e0) Create stream\nI0813 19:07:02.595568    2322 log.go:172] (0xc000a33ad0) (0xc0005375e0) Stream added, broadcasting: 3\nI0813 19:07:02.596697    2322 log.go:172] (0xc000a33ad0) Reply frame received for 3\nI0813 19:07:02.596802    2322 log.go:172] (0xc000a33ad0) (0xc0003a4a00) Create stream\nI0813 19:07:02.596814    2322 log.go:172] (0xc000a33ad0) (0xc0003a4a00) Stream added, broadcasting: 5\nI0813 19:07:02.597992    2322 log.go:172] (0xc000a33ad0) Reply frame received for 5\nI0813 19:07:02.670067    2322 log.go:172] (0xc000a33ad0) Data frame received for 3\nI0813 19:07:02.670086    2322 log.go:172] (0xc0005375e0) (3) Data frame handling\nI0813 19:07:02.670339    2322 log.go:172] (0xc000a33ad0) Data frame received for 5\nI0813 19:07:02.670350    2322 log.go:172] (0xc0003a4a00) (5) Data frame handling\nI0813 19:07:02.670361    2322 log.go:172] (0xc0003a4a00) (5) Data frame sent\nI0813 19:07:02.670366    2322 log.go:172] (0xc000a33ad0) Data frame received for 5\nI0813 19:07:02.670372    2322 log.go:172] (0xc0003a4a00) (5) Data frame handling\n+ nc -zv -t -w 2 172.18.0.13 30254\nConnection to 172.18.0.13 30254 port [tcp/30254] succeeded!\nI0813 19:07:02.671710    2322 log.go:172] (0xc000a33ad0) Data frame received for 1\nI0813 19:07:02.671740    2322 log.go:172] (0xc000a46a00) (1) Data frame handling\nI0813 19:07:02.671758    2322 log.go:172] (0xc000a46a00) (1) Data frame sent\nI0813 19:07:02.671781    2322 log.go:172] (0xc000a33ad0) (0xc000a46a00) Stream removed, broadcasting: 1\nI0813 19:07:02.671819    2322 log.go:172] (0xc000a33ad0) Go away received\nI0813 19:07:02.672134    2322 log.go:172] (0xc000a33ad0) (0xc000a46a00) Stream removed, broadcasting: 1\nI0813 19:07:02.672150    2322 log.go:172] (0xc000a33ad0) (0xc0005375e0) Stream removed, broadcasting: 3\nI0813 19:07:02.672158    2322 log.go:172] (0xc000a33ad0) (0xc0003a4a00) Stream removed, broadcasting: 5\n"
Aug 13 19:07:02.683: INFO: stdout: ""
Aug 13 19:07:02.683: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-5913 execpod8w7gb -- /bin/sh -x -c nc -zv -t -w 2 172.18.0.15 30254'
Aug 13 19:07:02.885: INFO: stderr: "I0813 19:07:02.807855    2342 log.go:172] (0xc0003c9c30) (0xc000829540) Create stream\nI0813 19:07:02.807948    2342 log.go:172] (0xc0003c9c30) (0xc000829540) Stream added, broadcasting: 1\nI0813 19:07:02.811627    2342 log.go:172] (0xc0003c9c30) Reply frame received for 1\nI0813 19:07:02.811829    2342 log.go:172] (0xc0003c9c30) (0xc000a72000) Create stream\nI0813 19:07:02.811862    2342 log.go:172] (0xc0003c9c30) (0xc000a72000) Stream added, broadcasting: 3\nI0813 19:07:02.813242    2342 log.go:172] (0xc0003c9c30) Reply frame received for 3\nI0813 19:07:02.813290    2342 log.go:172] (0xc0003c9c30) (0xc0008295e0) Create stream\nI0813 19:07:02.813319    2342 log.go:172] (0xc0003c9c30) (0xc0008295e0) Stream added, broadcasting: 5\nI0813 19:07:02.814326    2342 log.go:172] (0xc0003c9c30) Reply frame received for 5\nI0813 19:07:02.875809    2342 log.go:172] (0xc0003c9c30) Data frame received for 5\nI0813 19:07:02.875840    2342 log.go:172] (0xc0008295e0) (5) Data frame handling\nI0813 19:07:02.875858    2342 log.go:172] (0xc0008295e0) (5) Data frame sent\n+ nc -zv -t -w 2 172.18.0.15 30254\nConnection to 172.18.0.15 30254 port [tcp/30254] succeeded!\nI0813 19:07:02.876099    2342 log.go:172] (0xc0003c9c30) Data frame received for 3\nI0813 19:07:02.876123    2342 log.go:172] (0xc000a72000) (3) Data frame handling\nI0813 19:07:02.876183    2342 log.go:172] (0xc0003c9c30) Data frame received for 5\nI0813 19:07:02.876221    2342 log.go:172] (0xc0008295e0) (5) Data frame handling\nI0813 19:07:02.877383    2342 log.go:172] (0xc0003c9c30) Data frame received for 1\nI0813 19:07:02.877427    2342 log.go:172] (0xc000829540) (1) Data frame handling\nI0813 19:07:02.877453    2342 log.go:172] (0xc000829540) (1) Data frame sent\nI0813 19:07:02.877489    2342 log.go:172] (0xc0003c9c30) (0xc000829540) Stream removed, broadcasting: 1\nI0813 19:07:02.877528    2342 log.go:172] (0xc0003c9c30) Go away received\nI0813 19:07:02.877962    2342 log.go:172] (0xc0003c9c30) (0xc000829540) Stream removed, broadcasting: 1\nI0813 19:07:02.877981    2342 log.go:172] (0xc0003c9c30) (0xc000a72000) Stream removed, broadcasting: 3\nI0813 19:07:02.877990    2342 log.go:172] (0xc0003c9c30) (0xc0008295e0) Stream removed, broadcasting: 5\n"
Aug 13 19:07:02.885: INFO: stdout: ""
Aug 13 19:07:02.885: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:07:02.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5913" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:12.279 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":275,"completed":175,"skipped":2854,"failed":0}
SSSS
------------------------------
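The NodePort test above drives a single `nc` probe from an exec session (visible in the captured stderr: `+ nc -zv -t -w 2 172.18.0.15 30254`) and treats the run as passing when the probe reports success. A minimal sketch of that success check, using a hypothetical helper name and the output format shown in the log:

```shell
# The probe the test pod runs (from the captured stderr above):
#   nc -zv -t -w 2 <node-ip> <node-port>
# -z: scan without sending data, -t: TCP, -w 2: two-second timeout.
#
# Hypothetical helper: reads captured nc output on stdin and succeeds
# only if it contains the "succeeded!" marker seen in the log.
nodeport_probe_succeeded() {
  grep -q "succeeded!"
}

echo "Connection to 172.18.0.15 30254 port [tcp/30254] succeeded!" \
  | nodeport_probe_succeeded && echo "node port reachable"
```

This mirrors how the framework decides the connectivity step passed; the real test inspects the streamed exec output rather than a shell pipe.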
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:07:02.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: getting the auto-created API token
Aug 13 19:07:03.564: INFO: created pod pod-service-account-defaultsa
Aug 13 19:07:03.564: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 13 19:07:03.572: INFO: created pod pod-service-account-mountsa
Aug 13 19:07:03.572: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 13 19:07:03.592: INFO: created pod pod-service-account-nomountsa
Aug 13 19:07:03.592: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 13 19:07:03.636: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 13 19:07:03.636: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 13 19:07:03.683: INFO: created pod pod-service-account-mountsa-mountspec
Aug 13 19:07:03.683: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 13 19:07:03.744: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 13 19:07:03.744: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 13 19:07:03.817: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 13 19:07:03.817: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 13 19:07:03.838: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 13 19:07:03.838: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 13 19:07:03.896: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 13 19:07:03.896: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:07:03.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-357" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":275,"completed":176,"skipped":2858,"failed":0}
SSSSSSSS
------------------------------
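The ServiceAccounts pod list above encodes the automount precedence rule: the pod spec's `automountServiceAccountToken` wins when set, otherwise the service account's setting applies, and the default is to mount. A small sketch of that decision table (hypothetical helper; `unset` stands in for a field that is not specified):

```shell
# Decide whether the API token volume is mounted, per the precedence
# the test exercises: pod spec > service account > default (true).
#   $1 = serviceAccount automountServiceAccountToken (true|false|unset)
#   $2 = pod spec automountServiceAccountToken       (true|false|unset)
automount_token() {
  if [ "$2" != "unset" ]; then
    echo "$2"          # pod spec overrides everything
  elif [ "$1" != "unset" ]; then
    echo "$1"          # fall back to the service account's setting
  else
    echo "true"        # default: mount the token
  fi
}
```

Checking the helper against the log: `pod-service-account-nomountsa-mountspec` mounts the token (pod spec `true` overrides SA `false`), while `pod-service-account-defaultsa-nomountspec` does not (pod spec `false` overrides the default).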
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:07:04.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1744 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1744;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1744 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1744;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1744.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1744.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1744.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1744.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1744.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1744.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1744.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1744.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1744.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1744.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 107.200.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.200.107_udp@PTR;check="$$(dig +tcp +noall +answer +search 107.200.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.200.107_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1744 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1744;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1744 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1744;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1744.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1744.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1744.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1744.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1744.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1744.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1744.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1744.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1744.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1744.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1744.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1744.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 107.200.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.200.107_udp@PTR;check="$$(dig +tcp +noall +answer +search 107.200.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.200.107_tcp@PTR;sleep 1; done

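Two name transforms are buried in the probe scripts above: the pod A-record name built from the pod IP (`hostname -i | awk -F. ...`) and the reverse `in-addr.arpa.` name used for the PTR checks. Pulled out as standalone helpers (hypothetical function names; same awk logic as the probe):

```shell
# Pod A record: dots in the pod IP become dashes, suffixed with
# <namespace>.pod.cluster.local (as in the wheezy/jessie scripts above).
pod_a_record() {
  # $1 = pod IP, $2 = namespace
  echo "$1" | awk -F. -v ns="$2" \
    '{print $1"-"$2"-"$3"-"$4"."ns".pod.cluster.local"}'
}

# PTR name: IP octets reversed under in-addr.arpa.
# (10.97.200.107 -> 107.200.97.10.in-addr.arpa., matching the log.)
ptr_name() {
  echo "$1" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}'
}
```

The probe loops then feed these names to `dig +search` over UDP (`+notcp`) and TCP (`+tcp`) and write an `OK` marker file per successful lookup, which is what the "looking for the results for each expected name" step reads back.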
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 13 19:07:31.362: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.498: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.549: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.665: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.681: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.705: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.711: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.714: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.854: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.860: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.862: INFO: Unable to read jessie_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.865: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.869: INFO: Unable to read jessie_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.871: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.874: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.876: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:31.975: INFO: Lookups using dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1744 wheezy_tcp@dns-test-service.dns-1744 wheezy_udp@dns-test-service.dns-1744.svc wheezy_tcp@dns-test-service.dns-1744.svc wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1744 jessie_tcp@dns-test-service.dns-1744 jessie_udp@dns-test-service.dns-1744.svc jessie_tcp@dns-test-service.dns-1744.svc jessie_udp@_http._tcp.dns-test-service.dns-1744.svc jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc]

Aug 13 19:07:36.981: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:36.984: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:36.988: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:36.992: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:36.995: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:36.998: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.001: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.003: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.022: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.025: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.028: INFO: Unable to read jessie_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.030: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.033: INFO: Unable to read jessie_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.036: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.039: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.042: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:37.059: INFO: Lookups using dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1744 wheezy_tcp@dns-test-service.dns-1744 wheezy_udp@dns-test-service.dns-1744.svc wheezy_tcp@dns-test-service.dns-1744.svc wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1744 jessie_tcp@dns-test-service.dns-1744 jessie_udp@dns-test-service.dns-1744.svc jessie_tcp@dns-test-service.dns-1744.svc jessie_udp@_http._tcp.dns-test-service.dns-1744.svc jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc]

Aug 13 19:07:41.980: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:41.983: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:41.986: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:41.988: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:41.991: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:41.994: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:41.997: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:41.999: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:42.020: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:42.023: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:42.026: INFO: Unable to read jessie_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:42.029: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:42.032: INFO: Unable to read jessie_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:42.036: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:42.038: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:42.041: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:42.057: INFO: Lookups using dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1744 wheezy_tcp@dns-test-service.dns-1744 wheezy_udp@dns-test-service.dns-1744.svc wheezy_tcp@dns-test-service.dns-1744.svc wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1744 jessie_tcp@dns-test-service.dns-1744 jessie_udp@dns-test-service.dns-1744.svc jessie_tcp@dns-test-service.dns-1744.svc jessie_udp@_http._tcp.dns-test-service.dns-1744.svc jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc]

Aug 13 19:07:47.107: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.111: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.114: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.117: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.120: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.123: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.127: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.130: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.148: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.151: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.154: INFO: Unable to read jessie_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.157: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.159: INFO: Unable to read jessie_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.162: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.169: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.172: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:47.189: INFO: Lookups using dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1744 wheezy_tcp@dns-test-service.dns-1744 wheezy_udp@dns-test-service.dns-1744.svc wheezy_tcp@dns-test-service.dns-1744.svc wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1744 jessie_tcp@dns-test-service.dns-1744 jessie_udp@dns-test-service.dns-1744.svc jessie_tcp@dns-test-service.dns-1744.svc jessie_udp@_http._tcp.dns-test-service.dns-1744.svc jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc]

Aug 13 19:07:52.079: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.083: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.086: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.089: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.092: INFO: Unable to read wheezy_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.095: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.099: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.102: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.119: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.122: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.124: INFO: Unable to read jessie_udp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.127: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744 from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.129: INFO: Unable to read jessie_udp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.132: INFO: Unable to read jessie_tcp@dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.135: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.137: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc from pod dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4: the server could not find the requested resource (get pods dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4)
Aug 13 19:07:52.154: INFO: Lookups using dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1744 wheezy_tcp@dns-test-service.dns-1744 wheezy_udp@dns-test-service.dns-1744.svc wheezy_tcp@dns-test-service.dns-1744.svc wheezy_udp@_http._tcp.dns-test-service.dns-1744.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1744.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1744 jessie_tcp@dns-test-service.dns-1744 jessie_udp@dns-test-service.dns-1744.svc jessie_tcp@dns-test-service.dns-1744.svc jessie_udp@_http._tcp.dns-test-service.dns-1744.svc jessie_tcp@_http._tcp.dns-test-service.dns-1744.svc]
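The failed-lookup list above is the cross product of the two client images ("wheezy" and "jessie"), the two protocols, and the service-name variants being probed. A minimal sketch reconstructing that list, with names taken from the log:

```python
# Sketch: rebuild the probe names in the failure summary above. The e2e DNS
# test queries each service-name variant from two client images over both
# UDP and TCP, producing the 16 entries listed in the log.
images = ["wheezy", "jessie"]
protocols = ["udp", "tcp"]
# Service-name variants for service "dns-test-service" in namespace "dns-1744".
names = [
    "dns-test-service",
    "dns-test-service.dns-1744",
    "dns-test-service.dns-1744.svc",
    "_http._tcp.dns-test-service.dns-1744.svc",
]
probes = [
    f"{img}_{proto}@{name}"
    for img in images
    for name in names
    for proto in protocols
]
```

The iteration order (image outermost, protocol innermost) matches the order the log prints the failures in.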

Aug 13 19:07:57.220: INFO: DNS probes using dns-1744/dns-test-d6c11ead-a19d-4faa-ae64-8972508026b4 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:07:58.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1744" for this suite.

• [SLOW TEST:54.096 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":275,"completed":177,"skipped":2866,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:07:58.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 13 19:08:00.900: INFO: Pod name wrapped-volume-race-45c896c2-5c5f-4538-9921-42caee603e72: Found 0 pods out of 5
Aug 13 19:08:05.956: INFO: Pod name wrapped-volume-race-45c896c2-5c5f-4538-9921-42caee603e72: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-45c896c2-5c5f-4538-9921-42caee603e72 in namespace emptydir-wrapper-5435, will wait for the garbage collector to delete the pods
Aug 13 19:08:24.063: INFO: Deleting ReplicationController wrapped-volume-race-45c896c2-5c5f-4538-9921-42caee603e72 took: 8.132002ms
Aug 13 19:08:24.463: INFO: Terminating ReplicationController wrapped-volume-race-45c896c2-5c5f-4538-9921-42caee603e72 pods took: 400.263814ms
STEP: Creating RC which spawns configmap-volume pods
Aug 13 19:08:45.770: INFO: Pod name wrapped-volume-race-1d59f688-8de5-4530-a1bc-d968b77f01c7: Found 0 pods out of 5
Aug 13 19:08:51.044: INFO: Pod name wrapped-volume-race-1d59f688-8de5-4530-a1bc-d968b77f01c7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-1d59f688-8de5-4530-a1bc-d968b77f01c7 in namespace emptydir-wrapper-5435, will wait for the garbage collector to delete the pods
Aug 13 19:09:07.187: INFO: Deleting ReplicationController wrapped-volume-race-1d59f688-8de5-4530-a1bc-d968b77f01c7 took: 6.844548ms
Aug 13 19:09:07.488: INFO: Terminating ReplicationController wrapped-volume-race-1d59f688-8de5-4530-a1bc-d968b77f01c7 pods took: 300.252839ms
STEP: Creating RC which spawns configmap-volume pods
Aug 13 19:09:24.635: INFO: Pod name wrapped-volume-race-64dca928-b636-4ccc-935a-59761c770951: Found 0 pods out of 5
Aug 13 19:09:30.369: INFO: Pod name wrapped-volume-race-64dca928-b636-4ccc-935a-59761c770951: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-64dca928-b636-4ccc-935a-59761c770951 in namespace emptydir-wrapper-5435, will wait for the garbage collector to delete the pods
Aug 13 19:09:52.807: INFO: Deleting ReplicationController wrapped-volume-race-64dca928-b636-4ccc-935a-59761c770951 took: 126.128059ms
Aug 13 19:09:53.207: INFO: Terminating ReplicationController wrapped-volume-race-64dca928-b636-4ccc-935a-59761c770951 pods took: 400.312968ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:10:14.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-5435" for this suite.

• [SLOW TEST:136.212 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":275,"completed":178,"skipped":2911,"failed":0}
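The race test above creates 50 ConfigMaps and spawns pods that mount all of them at once, stressing the emptyDir wrapper. A hypothetical sketch of the pod shape involved (names and image are illustrative, not the exact manifest the framework builds):

```python
# Sketch: build a pod spec dict that mounts 50 ConfigMap volumes, roughly the
# shape of pod the wrapped-volume-race ReplicationController spawns. Volume
# names and the image are illustrative; the real framework uses UUID-based
# names of its own.
def configmap_volume_pod(n_configmaps=50):
    volumes, mounts = [], []
    for i in range(n_configmaps):
        name = f"racey-configmap-{i}"
        volumes.append({"name": name, "configMap": {"name": name}})
        mounts.append({"name": name, "mountPath": f"/etc/config-{i}"})
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"generateName": "wrapped-volume-race-"},
        "spec": {
            "containers": [{
                "name": "test-container",
                "image": "registry.k8s.io/pause:3.9",  # placeholder image
                "volumeMounts": mounts,
            }],
            "volumes": volumes,
        },
    }
```

Mounting many ConfigMap volumes in one pod is what historically raced inside the kubelet's emptyDir wrapper, which is what this [Serial] test guards against.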
SSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:10:14.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 13 19:10:18.492: INFO: &Pod{ObjectMeta:{send-events-c40a2c87-b308-40fe-adce-25e77ed3b234  events-9643 /api/v1/namespaces/events-9643/pods/send-events-c40a2c87-b308-40fe-adce-25e77ed3b234 4d2a0102-ef96-4ff2-88c0-2b6fbd5abcfa 9289010 0 2020-08-13 19:10:14 +0000 UTC   map[name:foo time:445697912] map[] [] []  [{e2e.test Update v1 2020-08-13 19:10:14 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 116 105 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 112 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 114 103 115 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 114 116 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 99 111 110 116 97 105 110 101 114 80 111 114 116 92 34 58 56 48 44 92 34 112 114 111 116 111 99 111 108 92 34 58 92 34 84 67 80 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 99 111 110 116 97 105 110 101 114 80 111 114 116 34 58 123 125 44 34 102 58 112 114 111 116 111 99 111 108 34 58 123 125 125 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 
44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 19:10:18 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 48 49 92 34 
125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-92x9s,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-92x9s,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-92x9s,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChan
gePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:10:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:10:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:10:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:10:14 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.201,StartTime:2020-08-13 19:10:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 19:10:17 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://7f012c99fea1ff0a2a569359cd1fbdb06534327791059b7d6735e9e87261a46e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.201,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
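The `FieldsV1{Raw:*[...]}` payloads in the pod dump above are JSON documents printed as Go byte slices. Decoding the first fifteen bytes copied from the log recovers the opening of that JSON:

```python
# Sketch: the managedFields FieldsV1 Raw values in the pod dump are JSON
# rendered as decimal byte values. Decoding a prefix taken verbatim from the
# log recovers readable text.
raw_prefix = [123, 34, 102, 58, 109, 101, 116, 97, 100, 97, 116, 97, 34, 58, 123]
decoded = bytes(raw_prefix).decode("utf-8")
print(decoded)  # {"f:metadata":{
```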

STEP: checking for scheduler event about the pod
Aug 13 19:10:20.499: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 13 19:10:22.704: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:10:23.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-9643" for this suite.

• [SLOW TEST:9.287 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":275,"completed":179,"skipped":2919,"failed":0}
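The checks above wait separately for an event reported by the scheduler and one reported by the kubelet for the same pod. A minimal sketch of that split, filtering by reporting component (field names follow the core/v1 Event schema; the sample records are fabricated):

```python
# Sketch: partition a pod's events by reporting component, as the test does
# when it looks for "scheduler event" and "kubelet event" separately. Sample
# events are made up; real ones come from the events API filtered by
# involvedObject name and namespace.
events = [
    {"reason": "Scheduled", "source": {"component": "default-scheduler"}},
    {"reason": "Pulled", "source": {"component": "kubelet", "host": "kali-worker"}},
    {"reason": "Started", "source": {"component": "kubelet", "host": "kali-worker"}},
]
scheduler_events = [e for e in events if e["source"]["component"] == "default-scheduler"]
kubelet_events = [e for e in events if e["source"]["component"] == "kubelet"]
```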
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:10:23.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:10:24.404: INFO: Create a RollingUpdate DaemonSet
Aug 13 19:10:24.446: INFO: Check that daemon pods launch on every node of the cluster
Aug 13 19:10:24.573: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:24.605: INFO: Number of nodes with available pods: 0
Aug 13 19:10:24.605: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:10:25.853: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:25.938: INFO: Number of nodes with available pods: 0
Aug 13 19:10:25.938: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:10:26.639: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:26.674: INFO: Number of nodes with available pods: 0
Aug 13 19:10:26.674: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:10:28.280: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:28.740: INFO: Number of nodes with available pods: 0
Aug 13 19:10:28.740: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:10:29.771: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:29.830: INFO: Number of nodes with available pods: 0
Aug 13 19:10:29.830: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:10:30.610: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:30.897: INFO: Number of nodes with available pods: 0
Aug 13 19:10:30.897: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:10:31.637: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:31.781: INFO: Number of nodes with available pods: 1
Aug 13 19:10:31.781: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:10:32.633: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:32.669: INFO: Number of nodes with available pods: 2
Aug 13 19:10:32.669: INFO: Number of running nodes: 2, number of available pods: 2
Aug 13 19:10:32.669: INFO: Update the DaemonSet to trigger a rollout
Aug 13 19:10:32.871: INFO: Updating DaemonSet daemon-set
Aug 13 19:10:44.012: INFO: Roll back the DaemonSet before rollout is complete
Aug 13 19:10:44.020: INFO: Updating DaemonSet daemon-set
Aug 13 19:10:44.020: INFO: Make sure DaemonSet rollback is complete
Aug 13 19:10:44.052: INFO: Wrong image for pod: daemon-set-lgmgg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 13 19:10:44.052: INFO: Pod daemon-set-lgmgg is not available
Aug 13 19:10:44.087: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:45.091: INFO: Wrong image for pod: daemon-set-lgmgg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 13 19:10:45.092: INFO: Pod daemon-set-lgmgg is not available
Aug 13 19:10:45.096: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:46.127: INFO: Wrong image for pod: daemon-set-lgmgg. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Aug 13 19:10:46.127: INFO: Pod daemon-set-lgmgg is not available
Aug 13 19:10:46.298: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:10:47.116: INFO: Pod daemon-set-7lfqr is not available
Aug 13 19:10:47.120: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6946, will wait for the garbage collector to delete the pods
Aug 13 19:10:47.196: INFO: Deleting DaemonSet.extensions daemon-set took: 6.441685ms
Aug 13 19:10:48.296: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.100192891s
Aug 13 19:10:53.999: INFO: Number of nodes with available pods: 0
Aug 13 19:10:53.999: INFO: Number of running nodes: 0, number of available pods: 0
Aug 13 19:10:54.001: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6946/daemonsets","resourceVersion":"9289370"},"items":null}

Aug 13 19:10:54.003: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6946/pods","resourceVersion":"9289370"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:10:54.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6946" for this suite.

• [SLOW TEST:30.365 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":275,"completed":180,"skipped":2959,"failed":0}
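The rollback above reverts the DaemonSet to its previous pod template before the bad `foo:non-existent` image finishes rolling out, so pods that already run the good image are not restarted. A simplified sketch of the revision bookkeeping (records mimic ControllerRevisions; images match the log):

```python
# Sketch: choose the template to roll back to from a DaemonSet's revision
# history, the way a rollout undo picks the previous revision. Simplified
# ControllerRevision-style records; images are taken from the log above.
revisions = [
    {"revision": 1, "image": "docker.io/library/httpd:2.4.38-alpine"},
    {"revision": 2, "image": "foo:non-existent"},  # the bad rollout
]

def previous_revision(revisions):
    ordered = sorted(revisions, key=lambda r: r["revision"])
    if len(ordered) < 2:
        raise ValueError("no previous revision to roll back to")
    return ordered[-2]  # the revision just before the current one
```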
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:10:54.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-5028
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 13 19:10:54.634: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 13 19:10:55.465: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 19:10:57.973: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 19:10:59.584: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 19:11:01.468: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:11:03.469: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:11:05.469: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:11:07.469: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:11:10.248: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:11:11.812: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:11:13.601: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:11:15.655: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:11:17.538: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 13 19:11:17.543: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 13 19:11:30.145: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.205:8080/dial?request=hostname&protocol=udp&host=10.244.2.204&port=8081&tries=1'] Namespace:pod-network-test-5028 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:11:30.145: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:11:30.173578       7 log.go:172] (0xc004ac2370) (0xc0014f4dc0) Create stream
I0813 19:11:30.173605       7 log.go:172] (0xc004ac2370) (0xc0014f4dc0) Stream added, broadcasting: 1
I0813 19:11:30.175287       7 log.go:172] (0xc004ac2370) Reply frame received for 1
I0813 19:11:30.175332       7 log.go:172] (0xc004ac2370) (0xc0014f4e60) Create stream
I0813 19:11:30.175349       7 log.go:172] (0xc004ac2370) (0xc0014f4e60) Stream added, broadcasting: 3
I0813 19:11:30.176321       7 log.go:172] (0xc004ac2370) Reply frame received for 3
I0813 19:11:30.176351       7 log.go:172] (0xc004ac2370) (0xc000b50280) Create stream
I0813 19:11:30.176357       7 log.go:172] (0xc004ac2370) (0xc000b50280) Stream added, broadcasting: 5
I0813 19:11:30.177200       7 log.go:172] (0xc004ac2370) Reply frame received for 5
I0813 19:11:30.241314       7 log.go:172] (0xc004ac2370) Data frame received for 3
I0813 19:11:30.241394       7 log.go:172] (0xc0014f4e60) (3) Data frame handling
I0813 19:11:30.241424       7 log.go:172] (0xc0014f4e60) (3) Data frame sent
I0813 19:11:30.241650       7 log.go:172] (0xc004ac2370) Data frame received for 3
I0813 19:11:30.241684       7 log.go:172] (0xc0014f4e60) (3) Data frame handling
I0813 19:11:30.241703       7 log.go:172] (0xc004ac2370) Data frame received for 5
I0813 19:11:30.241712       7 log.go:172] (0xc000b50280) (5) Data frame handling
I0813 19:11:30.243237       7 log.go:172] (0xc004ac2370) Data frame received for 1
I0813 19:11:30.243276       7 log.go:172] (0xc0014f4dc0) (1) Data frame handling
I0813 19:11:30.243301       7 log.go:172] (0xc0014f4dc0) (1) Data frame sent
I0813 19:11:30.243318       7 log.go:172] (0xc004ac2370) (0xc0014f4dc0) Stream removed, broadcasting: 1
I0813 19:11:30.243360       7 log.go:172] (0xc004ac2370) Go away received
I0813 19:11:30.243479       7 log.go:172] (0xc004ac2370) (0xc0014f4dc0) Stream removed, broadcasting: 1
I0813 19:11:30.243511       7 log.go:172] (0xc004ac2370) (0xc0014f4e60) Stream removed, broadcasting: 3
I0813 19:11:30.243527       7 log.go:172] (0xc004ac2370) (0xc000b50280) Stream removed, broadcasting: 5
Aug 13 19:11:30.243: INFO: Waiting for responses: map[]
Aug 13 19:11:30.278: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.205:8080/dial?request=hostname&protocol=udp&host=10.244.1.109&port=8081&tries=1'] Namespace:pod-network-test-5028 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:11:30.278: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:11:30.303227       7 log.go:172] (0xc00282c9a0) (0xc00126c5a0) Create stream
I0813 19:11:30.303252       7 log.go:172] (0xc00282c9a0) (0xc00126c5a0) Stream added, broadcasting: 1
I0813 19:11:30.305094       7 log.go:172] (0xc00282c9a0) Reply frame received for 1
I0813 19:11:30.305118       7 log.go:172] (0xc00282c9a0) (0xc000b503c0) Create stream
I0813 19:11:30.305126       7 log.go:172] (0xc00282c9a0) (0xc000b503c0) Stream added, broadcasting: 3
I0813 19:11:30.306030       7 log.go:172] (0xc00282c9a0) Reply frame received for 3
I0813 19:11:30.306065       7 log.go:172] (0xc00282c9a0) (0xc001bb8000) Create stream
I0813 19:11:30.306102       7 log.go:172] (0xc00282c9a0) (0xc001bb8000) Stream added, broadcasting: 5
I0813 19:11:30.306931       7 log.go:172] (0xc00282c9a0) Reply frame received for 5
I0813 19:11:30.370567       7 log.go:172] (0xc00282c9a0) Data frame received for 3
I0813 19:11:30.370598       7 log.go:172] (0xc000b503c0) (3) Data frame handling
I0813 19:11:30.370618       7 log.go:172] (0xc000b503c0) (3) Data frame sent
I0813 19:11:30.371052       7 log.go:172] (0xc00282c9a0) Data frame received for 5
I0813 19:11:30.371069       7 log.go:172] (0xc001bb8000) (5) Data frame handling
I0813 19:11:30.371237       7 log.go:172] (0xc00282c9a0) Data frame received for 3
I0813 19:11:30.371252       7 log.go:172] (0xc000b503c0) (3) Data frame handling
I0813 19:11:30.372464       7 log.go:172] (0xc00282c9a0) Data frame received for 1
I0813 19:11:30.372496       7 log.go:172] (0xc00126c5a0) (1) Data frame handling
I0813 19:11:30.372521       7 log.go:172] (0xc00126c5a0) (1) Data frame sent
I0813 19:11:30.372534       7 log.go:172] (0xc00282c9a0) (0xc00126c5a0) Stream removed, broadcasting: 1
I0813 19:11:30.372583       7 log.go:172] (0xc00282c9a0) Go away received
I0813 19:11:30.372612       7 log.go:172] (0xc00282c9a0) (0xc00126c5a0) Stream removed, broadcasting: 1
I0813 19:11:30.372633       7 log.go:172] (0xc00282c9a0) (0xc000b503c0) Stream removed, broadcasting: 3
I0813 19:11:30.372655       7 log.go:172] (0xc00282c9a0) (0xc001bb8000) Stream removed, broadcasting: 5
Aug 13 19:11:30.372: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:11:30.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5028" for this suite.

• [SLOW TEST:36.363 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":275,"completed":181,"skipped":2959,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:11:30.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-b5g7
STEP: Creating a pod to test atomic-volume-subpath
Aug 13 19:11:30.536: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-b5g7" in namespace "subpath-2234" to be "Succeeded or Failed"
Aug 13 19:11:30.557: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Pending", Reason="", readiness=false. Elapsed: 20.449638ms
Aug 13 19:11:32.646: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109957836s
Aug 13 19:11:34.650: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.114055469s
Aug 13 19:11:36.679: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.143049898s
Aug 13 19:11:38.733: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 8.19667337s
Aug 13 19:11:40.735: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 10.199269282s
Aug 13 19:11:42.889: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 12.353194927s
Aug 13 19:11:44.895: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 14.358522282s
Aug 13 19:11:46.899: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 16.362417931s
Aug 13 19:11:48.903: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 18.366754548s
Aug 13 19:11:50.907: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 20.370931413s
Aug 13 19:11:52.911: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 22.375286063s
Aug 13 19:11:54.937: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 24.400641602s
Aug 13 19:11:56.940: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Running", Reason="", readiness=true. Elapsed: 26.403377184s
Aug 13 19:11:58.944: INFO: Pod "pod-subpath-test-configmap-b5g7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.407856786s
STEP: Saw pod success
Aug 13 19:11:58.944: INFO: Pod "pod-subpath-test-configmap-b5g7" satisfied condition "Succeeded or Failed"
Aug 13 19:11:58.947: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-b5g7 container test-container-subpath-configmap-b5g7: 
STEP: delete the pod
Aug 13 19:11:58.983: INFO: Waiting for pod pod-subpath-test-configmap-b5g7 to disappear
Aug 13 19:11:58.988: INFO: Pod pod-subpath-test-configmap-b5g7 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-b5g7
Aug 13 19:11:58.988: INFO: Deleting pod "pod-subpath-test-configmap-b5g7" in namespace "subpath-2234"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:11:58.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2234" for this suite.

• [SLOW TEST:28.618 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":275,"completed":182,"skipped":2969,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:11:58.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 13 19:12:06.147: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:12:06.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7206" for this suite.

• [SLOW TEST:7.298 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":275,"completed":183,"skipped":2989,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:12:06.297: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:12:12.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9695" for this suite.

• [SLOW TEST:6.790 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:41
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":275,"completed":184,"skipped":3007,"failed":0}
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:12:13.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1454
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 13 19:12:13.975: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-932'
Aug 13 19:12:14.368: INFO: stderr: ""
Aug 13 19:12:14.368: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Aug 13 19:12:19.419: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-932 -o json'
Aug 13 19:12:19.535: INFO: stderr: ""
Aug 13 19:12:19.535: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-13T19:12:14Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"managedFields\": [\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:metadata\": {\n                        \"f:labels\": {\n                            \".\": {},\n                            \"f:run\": {}\n                        }\n                    },\n                    \"f:spec\": {\n                        \"f:containers\": {\n                            \"k:{\\\"name\\\":\\\"e2e-test-httpd-pod\\\"}\": {\n                                \".\": {},\n                                \"f:image\": {},\n                                \"f:imagePullPolicy\": {},\n                                \"f:name\": {},\n                                \"f:resources\": {},\n                                \"f:terminationMessagePath\": {},\n                                \"f:terminationMessagePolicy\": {}\n                            }\n                        },\n                        \"f:dnsPolicy\": {},\n                        \"f:enableServiceLinks\": {},\n                        \"f:restartPolicy\": {},\n                        \"f:schedulerName\": {},\n                        \"f:securityContext\": {},\n                        \"f:terminationGracePeriodSeconds\": {}\n                    }\n                },\n                \"manager\": \"kubectl\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-13T19:12:14Z\"\n            },\n            {\n                \"apiVersion\": \"v1\",\n                \"fieldsType\": \"FieldsV1\",\n                \"fieldsV1\": {\n                    \"f:status\": {\n                        \"f:conditions\": {\n                        
    \"k:{\\\"type\\\":\\\"ContainersReady\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Initialized\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            },\n                            \"k:{\\\"type\\\":\\\"Ready\\\"}\": {\n                                \".\": {},\n                                \"f:lastProbeTime\": {},\n                                \"f:lastTransitionTime\": {},\n                                \"f:status\": {},\n                                \"f:type\": {}\n                            }\n                        },\n                        \"f:containerStatuses\": {},\n                        \"f:hostIP\": {},\n                        \"f:phase\": {},\n                        \"f:podIP\": {},\n                        \"f:podIPs\": {\n                            \".\": {},\n                            \"k:{\\\"ip\\\":\\\"10.244.1.110\\\"}\": {\n                                \".\": {},\n                                \"f:ip\": {}\n                            }\n                        },\n                        \"f:startTime\": {}\n                    }\n                },\n                \"manager\": \"kubelet\",\n                \"operation\": \"Update\",\n                \"time\": \"2020-08-13T19:12:19Z\"\n            }\n        ],\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-932\",\n        \"resourceVersion\": \"9289791\",\n        \"selfLink\": 
\"/api/v1/namespaces/kubectl-932/pods/e2e-test-httpd-pod\",\n        \"uid\": \"72ac180f-e4f6-4f78-ae01-2bed0b51cf0e\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-vbmsn\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"kali-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-vbmsn\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-vbmsn\"\n                }\n        
    }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-13T19:12:14Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-13T19:12:19Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-13T19:12:19Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-13T19:12:14Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://8acdb81a4c493fbc7c0c108902b40806bf6026d5054936cc8cda219a4bd20b7e\",\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-13T19:12:18Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.15\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.1.110\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.244.1.110\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        
\"startTime\": \"2020-08-13T19:12:14Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 13 19:12:19.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-932'
Aug 13 19:12:19.925: INFO: stderr: ""
Aug 13 19:12:19.926: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1459
Aug 13 19:12:19.929: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-932'
Aug 13 19:12:33.664: INFO: stderr: ""
Aug 13 19:12:33.665: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:12:33.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-932" for this suite.

• [SLOW TEST:20.753 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1450
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":275,"completed":185,"skipped":3015,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:12:33.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-6962868e-baba-46fc-9202-a586fb7413d9
STEP: Creating a pod to test consume configMaps
Aug 13 19:12:34.506: INFO: Waiting up to 5m0s for pod "pod-configmaps-fe5df498-e64a-47e9-8e43-5df93720bf53" in namespace "configmap-4188" to be "Succeeded or Failed"
Aug 13 19:12:34.510: INFO: Pod "pod-configmaps-fe5df498-e64a-47e9-8e43-5df93720bf53": Phase="Pending", Reason="", readiness=false. Elapsed: 3.621209ms
Aug 13 19:12:36.514: INFO: Pod "pod-configmaps-fe5df498-e64a-47e9-8e43-5df93720bf53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007988438s
Aug 13 19:12:38.517: INFO: Pod "pod-configmaps-fe5df498-e64a-47e9-8e43-5df93720bf53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011188457s
STEP: Saw pod success
Aug 13 19:12:38.517: INFO: Pod "pod-configmaps-fe5df498-e64a-47e9-8e43-5df93720bf53" satisfied condition "Succeeded or Failed"
Aug 13 19:12:38.520: INFO: Trying to get logs from node kali-worker pod pod-configmaps-fe5df498-e64a-47e9-8e43-5df93720bf53 container configmap-volume-test: 
STEP: delete the pod
Aug 13 19:12:38.673: INFO: Waiting for pod pod-configmaps-fe5df498-e64a-47e9-8e43-5df93720bf53 to disappear
Aug 13 19:12:38.715: INFO: Pod pod-configmaps-fe5df498-e64a-47e9-8e43-5df93720bf53 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:12:38.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4188" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":186,"skipped":3063,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:12:38.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod test-webserver-3eeaed30-2145-4a71-b7c4-1c72e31e5159 in namespace container-probe-8308
Aug 13 19:12:43.096: INFO: Started pod test-webserver-3eeaed30-2145-4a71-b7c4-1c72e31e5159 in namespace container-probe-8308
STEP: checking the pod's current state and verifying that restartCount is present
Aug 13 19:12:43.099: INFO: Initial restart count of pod test-webserver-3eeaed30-2145-4a71-b7c4-1c72e31e5159 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:16:44.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8308" for this suite.

• [SLOW TEST:246.964 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":275,"completed":187,"skipped":3064,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:16:45.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating replication controller my-hostname-basic-b1482773-ac8c-4a46-8fd2-9cababbf99b6
Aug 13 19:16:46.963: INFO: Pod name my-hostname-basic-b1482773-ac8c-4a46-8fd2-9cababbf99b6: Found 0 pods out of 1
Aug 13 19:16:51.968: INFO: Pod name my-hostname-basic-b1482773-ac8c-4a46-8fd2-9cababbf99b6: Found 1 pods out of 1
Aug 13 19:16:51.968: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-b1482773-ac8c-4a46-8fd2-9cababbf99b6" are running
Aug 13 19:16:51.971: INFO: Pod "my-hostname-basic-b1482773-ac8c-4a46-8fd2-9cababbf99b6-dqxqr" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-13 19:16:47 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-13 19:16:50 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-13 19:16:50 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-13 19:16:46 +0000 UTC Reason: Message:}])
Aug 13 19:16:51.971: INFO: Trying to dial the pod
Aug 13 19:16:56.982: INFO: Controller my-hostname-basic-b1482773-ac8c-4a46-8fd2-9cababbf99b6: Got expected result from replica 1 [my-hostname-basic-b1482773-ac8c-4a46-8fd2-9cababbf99b6-dqxqr]: "my-hostname-basic-b1482773-ac8c-4a46-8fd2-9cababbf99b6-dqxqr", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:16:56.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5667" for this suite.

• [SLOW TEST:11.301 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":275,"completed":188,"skipped":3097,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:16:56.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-5078
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating statefulset ss in namespace statefulset-5078
Aug 13 19:16:57.154: INFO: Found 0 stateful pods, waiting for 1
Aug 13 19:17:07.159: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 13 19:17:07.192: INFO: Deleting all statefulset in ns statefulset-5078
Aug 13 19:17:07.237: INFO: Scaling statefulset ss to 0
Aug 13 19:17:17.654: INFO: Waiting for statefulset status.replicas updated to 0
Aug 13 19:17:17.658: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:17:17.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5078" for this suite.

• [SLOW TEST:20.721 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":275,"completed":189,"skipped":3109,"failed":0}
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:17:17.712: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:17:17.787: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:17:18.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5276" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":275,"completed":190,"skipped":3109,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:17:18.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 13 19:17:18.526: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5974 /api/v1/namespaces/watch-5974/configmaps/e2e-watch-test-resource-version ed181ba6-2d8c-4402-86ec-2cfdfdd042d4 9290821 0 2020-08-13 19:17:18 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-13 19:17:18 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Aug 13 19:17:18.526: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-5974 /api/v1/namespaces/watch-5974/configmaps/e2e-watch-test-resource-version ed181ba6-2d8c-4402-86ec-2cfdfdd042d4 9290822 0 2020-08-13 19:17:18 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  [{e2e.test Update v1 2020-08-13 19:17:18 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 100 97 116 97 34 58 123 34 46 34 58 123 125 44 34 102 58 109 117 116 97 116 105 111 110 34 58 123 125 125 44 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 119 97 116 99 104 45 116 104 105 115 45 99 111 110 102 105 103 109 97 112 34 58 123 125 125 125 125],}}]},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:17:18.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5974" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":275,"completed":191,"skipped":3119,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:17:18.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Aug 13 19:17:18.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Aug 13 19:17:29.261: INFO: >>> kubeConfig: /root/.kube/config
Aug 13 19:17:32.892: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:17:44.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1016" for this suite.

• [SLOW TEST:26.217 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":275,"completed":192,"skipped":3145,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:17:44.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap configmap-2075/configmap-test-db21f491-9841-48b1-ae05-f6699337d63e
STEP: Creating a pod to test consume configMaps
Aug 13 19:17:44.847: INFO: Waiting up to 5m0s for pod "pod-configmaps-f6d04bdc-aea0-4600-a3a2-a241881c7688" in namespace "configmap-2075" to be "Succeeded or Failed"
Aug 13 19:17:44.867: INFO: Pod "pod-configmaps-f6d04bdc-aea0-4600-a3a2-a241881c7688": Phase="Pending", Reason="", readiness=false. Elapsed: 20.34183ms
Aug 13 19:17:46.970: INFO: Pod "pod-configmaps-f6d04bdc-aea0-4600-a3a2-a241881c7688": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122519862s
Aug 13 19:17:48.974: INFO: Pod "pod-configmaps-f6d04bdc-aea0-4600-a3a2-a241881c7688": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126773401s
Aug 13 19:17:50.978: INFO: Pod "pod-configmaps-f6d04bdc-aea0-4600-a3a2-a241881c7688": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.131411826s
STEP: Saw pod success
Aug 13 19:17:50.979: INFO: Pod "pod-configmaps-f6d04bdc-aea0-4600-a3a2-a241881c7688" satisfied condition "Succeeded or Failed"
Aug 13 19:17:50.981: INFO: Trying to get logs from node kali-worker pod pod-configmaps-f6d04bdc-aea0-4600-a3a2-a241881c7688 container env-test: 
STEP: delete the pod
Aug 13 19:17:51.029: INFO: Waiting for pod pod-configmaps-f6d04bdc-aea0-4600-a3a2-a241881c7688 to disappear
Aug 13 19:17:51.047: INFO: Pod pod-configmaps-f6d04bdc-aea0-4600-a3a2-a241881c7688 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:17:51.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2075" for this suite.

• [SLOW TEST:6.328 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:34
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":275,"completed":193,"skipped":3175,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:17:51.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:17:51.969: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:17:53.980: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943072, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943072, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943072, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943071, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:17:57.024: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:17:57.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1282" for this suite.
STEP: Destroying namespace "webhook-1282-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:6.255 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":275,"completed":194,"skipped":3190,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:17:57.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:17:57.447: INFO: (0) /api/v1/nodes/kali-worker2/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:18:57.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2447" for this suite.

• [SLOW TEST:60.241 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":275,"completed":196,"skipped":3238,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:18:57.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:271
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a replication controller
Aug 13 19:18:57.954: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2576'
Aug 13 19:19:02.053: INFO: stderr: ""
Aug 13 19:19:02.053: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 13 19:19:02.053: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2576'
Aug 13 19:19:02.228: INFO: stderr: ""
Aug 13 19:19:02.228: INFO: stdout: "update-demo-nautilus-56p8z update-demo-nautilus-zsqcr "
Aug 13 19:19:02.228: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56p8z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:02.340: INFO: stderr: ""
Aug 13 19:19:02.340: INFO: stdout: ""
Aug 13 19:19:02.340: INFO: update-demo-nautilus-56p8z is created but not running
Aug 13 19:19:07.340: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2576'
Aug 13 19:19:07.689: INFO: stderr: ""
Aug 13 19:19:07.690: INFO: stdout: "update-demo-nautilus-56p8z update-demo-nautilus-zsqcr "
Aug 13 19:19:07.690: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56p8z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:07.903: INFO: stderr: ""
Aug 13 19:19:07.903: INFO: stdout: "true"
Aug 13 19:19:07.903: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-56p8z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:08.008: INFO: stderr: ""
Aug 13 19:19:08.008: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 13 19:19:08.008: INFO: validating pod update-demo-nautilus-56p8z
Aug 13 19:19:08.012: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 13 19:19:08.013: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 13 19:19:08.013: INFO: update-demo-nautilus-56p8z is verified up and running
Aug 13 19:19:08.013: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zsqcr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:08.112: INFO: stderr: ""
Aug 13 19:19:08.112: INFO: stdout: "true"
Aug 13 19:19:08.112: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zsqcr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:08.211: INFO: stderr: ""
Aug 13 19:19:08.212: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 13 19:19:08.212: INFO: validating pod update-demo-nautilus-zsqcr
Aug 13 19:19:08.215: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 13 19:19:08.215: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 13 19:19:08.215: INFO: update-demo-nautilus-zsqcr is verified up and running
STEP: scaling down the replication controller
Aug 13 19:19:08.218: INFO: scanned /root for discovery docs: 
Aug 13 19:19:08.218: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2576'
Aug 13 19:19:09.428: INFO: stderr: ""
Aug 13 19:19:09.429: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 13 19:19:09.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2576'
Aug 13 19:19:09.532: INFO: stderr: ""
Aug 13 19:19:09.532: INFO: stdout: "update-demo-nautilus-56p8z update-demo-nautilus-zsqcr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 13 19:19:14.532: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2576'
Aug 13 19:19:14.639: INFO: stderr: ""
Aug 13 19:19:14.639: INFO: stdout: "update-demo-nautilus-56p8z update-demo-nautilus-zsqcr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 13 19:19:19.639: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2576'
Aug 13 19:19:19.767: INFO: stderr: ""
Aug 13 19:19:19.767: INFO: stdout: "update-demo-nautilus-56p8z update-demo-nautilus-zsqcr "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 13 19:19:24.767: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2576'
Aug 13 19:19:24.863: INFO: stderr: ""
Aug 13 19:19:24.863: INFO: stdout: "update-demo-nautilus-zsqcr "
Aug 13 19:19:24.863: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zsqcr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:24.949: INFO: stderr: ""
Aug 13 19:19:24.949: INFO: stdout: "true"
Aug 13 19:19:24.949: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zsqcr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:25.040: INFO: stderr: ""
Aug 13 19:19:25.040: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 13 19:19:25.040: INFO: validating pod update-demo-nautilus-zsqcr
Aug 13 19:19:25.043: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 13 19:19:25.043: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 13 19:19:25.043: INFO: update-demo-nautilus-zsqcr is verified up and running
STEP: scaling up the replication controller
Aug 13 19:19:25.045: INFO: scanned /root for discovery docs: 
Aug 13 19:19:25.046: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2576'
Aug 13 19:19:26.210: INFO: stderr: ""
Aug 13 19:19:26.210: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 13 19:19:26.210: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2576'
Aug 13 19:19:26.308: INFO: stderr: ""
Aug 13 19:19:26.308: INFO: stdout: "update-demo-nautilus-jrdzr update-demo-nautilus-zsqcr "
Aug 13 19:19:26.308: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrdzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:26.428: INFO: stderr: ""
Aug 13 19:19:26.428: INFO: stdout: ""
Aug 13 19:19:26.428: INFO: update-demo-nautilus-jrdzr is created but not running
Aug 13 19:19:31.428: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2576'
Aug 13 19:19:31.525: INFO: stderr: ""
Aug 13 19:19:31.525: INFO: stdout: "update-demo-nautilus-jrdzr update-demo-nautilus-zsqcr "
Aug 13 19:19:31.525: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrdzr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:31.827: INFO: stderr: ""
Aug 13 19:19:31.827: INFO: stdout: "true"
Aug 13 19:19:31.827: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jrdzr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:31.978: INFO: stderr: ""
Aug 13 19:19:31.978: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 13 19:19:31.978: INFO: validating pod update-demo-nautilus-jrdzr
Aug 13 19:19:32.012: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 13 19:19:32.012: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 13 19:19:32.012: INFO: update-demo-nautilus-jrdzr is verified up and running
Aug 13 19:19:32.012: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zsqcr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:32.259: INFO: stderr: ""
Aug 13 19:19:32.260: INFO: stdout: "true"
Aug 13 19:19:32.260: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods update-demo-nautilus-zsqcr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2576'
Aug 13 19:19:32.422: INFO: stderr: ""
Aug 13 19:19:32.422: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 13 19:19:32.422: INFO: validating pod update-demo-nautilus-zsqcr
Aug 13 19:19:32.426: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 13 19:19:32.426: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 13 19:19:32.426: INFO: update-demo-nautilus-zsqcr is verified up and running
STEP: using delete to clean up resources
Aug 13 19:19:32.426: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2576'
Aug 13 19:19:32.535: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 13 19:19:32.535: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 13 19:19:32.535: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2576'
Aug 13 19:19:32.667: INFO: stderr: "No resources found in kubectl-2576 namespace.\n"
Aug 13 19:19:32.667: INFO: stdout: ""
Aug 13 19:19:32.667: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2576 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 13 19:19:32.772: INFO: stderr: ""
Aug 13 19:19:32.773: INFO: stdout: "update-demo-nautilus-jrdzr\nupdate-demo-nautilus-zsqcr\n"
Aug 13 19:19:33.273: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2576'
Aug 13 19:19:33.373: INFO: stderr: "No resources found in kubectl-2576 namespace.\n"
Aug 13 19:19:33.373: INFO: stdout: ""
Aug 13 19:19:33.373: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2576 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 13 19:19:33.471: INFO: stderr: ""
Aug 13 19:19:33.471: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:19:33.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2576" for this suite.

• [SLOW TEST:35.660 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:269
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":275,"completed":197,"skipped":3240,"failed":0}
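The `kubectl get pods -o go-template=...` calls above filter out pods already marked for deletion. A minimal standalone sketch of that template logic (the pod data below is hypothetical, shaped like the JSON kubectl feeds the template):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// filterPodNames renders the same Go template string the test passes to
// `kubectl get pods -o go-template=...`: it emits the name of every pod
// whose metadata carries no deletionTimestamp (i.e. pods not yet marked
// for deletion), one per line.
func filterPodNames(data map[string]interface{}) string {
	const tpl = `{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}`
	var sb strings.Builder
	t := template.Must(template.New("pods").Parse(tpl))
	if err := t.Execute(&sb, data); err != nil {
		panic(err)
	}
	return sb.String()
}

func main() {
	// Hypothetical pod list: the second pod has a deletionTimestamp set,
	// so only the first name is printed.
	data := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{"name": "update-demo-nautilus-jrdzr"}},
			{"metadata": map[string]interface{}{
				"name":              "update-demo-nautilus-zsqcr",
				"deletionTimestamp": "2020-08-13T19:19:32Z",
			}},
		},
	}
	fmt.Print(filterPodNames(data))
}
```

This explains the log progression above: right after the force delete, both replicas still lack a deletionTimestamp and are listed; half a second later the template output is empty.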
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:19:33.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:19:58.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5259" for this suite.

• [SLOW TEST:24.543 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":275,"completed":198,"skipped":3285,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:19:58.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-upd-546e0625-b104-4d7c-ace5-d869f8d03091
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:20:04.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9198" for this suite.

• [SLOW TEST:6.943 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:36
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":199,"skipped":3299,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:20:04.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name s-test-opt-del-991497d0-2d62-41d9-8f37-b9b93e2a6ad8
STEP: Creating secret with name s-test-opt-upd-d12b8c7f-a16a-4665-9b47-646e724865f9
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-991497d0-2d62-41d9-8f37-b9b93e2a6ad8
STEP: Updating secret s-test-opt-upd-d12b8c7f-a16a-4665-9b47-646e724865f9
STEP: Creating secret with name s-test-opt-create-bd70b6ba-4ce9-49f4-b6db-158801cbedee
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:20:18.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-274" for this suite.

• [SLOW TEST:13.350 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":275,"completed":200,"skipped":3310,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:20:18.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating Agnhost RC
Aug 13 19:20:18.429: INFO: namespace kubectl-8609
Aug 13 19:20:18.429: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8609'
Aug 13 19:20:18.763: INFO: stderr: ""
Aug 13 19:20:18.763: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Aug 13 19:20:19.768: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 13 19:20:19.768: INFO: Found 0 / 1
Aug 13 19:20:21.151: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 13 19:20:21.151: INFO: Found 0 / 1
Aug 13 19:20:21.768: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 13 19:20:21.768: INFO: Found 0 / 1
Aug 13 19:20:22.768: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 13 19:20:22.768: INFO: Found 0 / 1
Aug 13 19:20:23.827: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 13 19:20:23.827: INFO: Found 1 / 1
Aug 13 19:20:23.827: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 13 19:20:23.830: INFO: Selector matched 1 pods for map[app:agnhost]
Aug 13 19:20:23.830: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 13 19:20:23.830: INFO: wait on agnhost-master startup in kubectl-8609 
Aug 13 19:20:23.830: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config logs agnhost-master-b4rnn agnhost-master --namespace=kubectl-8609'
Aug 13 19:20:23.978: INFO: stderr: ""
Aug 13 19:20:23.978: INFO: stdout: "Paused\n"
STEP: exposing RC
Aug 13 19:20:23.978: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-8609'
Aug 13 19:20:24.170: INFO: stderr: ""
Aug 13 19:20:24.170: INFO: stdout: "service/rm2 exposed\n"
Aug 13 19:20:24.672: INFO: Service rm2 in namespace kubectl-8609 found.
STEP: exposing service
Aug 13 19:20:26.679: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-8609'
Aug 13 19:20:26.813: INFO: stderr: ""
Aug 13 19:20:26.813: INFO: stdout: "service/rm3 exposed\n"
Aug 13 19:20:26.834: INFO: Service rm3 in namespace kubectl-8609 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:20:28.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8609" for this suite.

• [SLOW TEST:10.529 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1119
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":275,"completed":201,"skipped":3351,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:20:28.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-7f1f2faa-2583-490d-a1e6-12e42d0a0395
STEP: Creating a pod to test consume configMaps
Aug 13 19:20:28.973: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1fd674a7-586b-411d-86e0-41892ab8db2e" in namespace "projected-2748" to be "Succeeded or Failed"
Aug 13 19:20:28.978: INFO: Pod "pod-projected-configmaps-1fd674a7-586b-411d-86e0-41892ab8db2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.713621ms
Aug 13 19:20:30.982: INFO: Pod "pod-projected-configmaps-1fd674a7-586b-411d-86e0-41892ab8db2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00878355s
Aug 13 19:20:33.246: INFO: Pod "pod-projected-configmaps-1fd674a7-586b-411d-86e0-41892ab8db2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.272641447s
Aug 13 19:20:35.292: INFO: Pod "pod-projected-configmaps-1fd674a7-586b-411d-86e0-41892ab8db2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.318978784s
STEP: Saw pod success
Aug 13 19:20:35.293: INFO: Pod "pod-projected-configmaps-1fd674a7-586b-411d-86e0-41892ab8db2e" satisfied condition "Succeeded or Failed"
Aug 13 19:20:35.295: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-1fd674a7-586b-411d-86e0-41892ab8db2e container projected-configmap-volume-test: 
STEP: delete the pod
Aug 13 19:20:35.572: INFO: Waiting for pod pod-projected-configmaps-1fd674a7-586b-411d-86e0-41892ab8db2e to disappear
Aug 13 19:20:35.618: INFO: Pod pod-projected-configmaps-1fd674a7-586b-411d-86e0-41892ab8db2e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:20:35.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2748" for this suite.

• [SLOW TEST:6.982 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":275,"completed":202,"skipped":3365,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:20:35.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-cb8e8868-6dcf-4e4a-9142-45188001a12a
STEP: Creating a pod to test consume configMaps
Aug 13 19:20:36.369: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c0fb6846-c781-44c8-967f-cde6dcf9d937" in namespace "projected-9509" to be "Succeeded or Failed"
Aug 13 19:20:36.673: INFO: Pod "pod-projected-configmaps-c0fb6846-c781-44c8-967f-cde6dcf9d937": Phase="Pending", Reason="", readiness=false. Elapsed: 303.900978ms
Aug 13 19:20:38.677: INFO: Pod "pod-projected-configmaps-c0fb6846-c781-44c8-967f-cde6dcf9d937": Phase="Pending", Reason="", readiness=false. Elapsed: 2.308076041s
Aug 13 19:20:40.681: INFO: Pod "pod-projected-configmaps-c0fb6846-c781-44c8-967f-cde6dcf9d937": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312498299s
Aug 13 19:20:42.731: INFO: Pod "pod-projected-configmaps-c0fb6846-c781-44c8-967f-cde6dcf9d937": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.362674567s
STEP: Saw pod success
Aug 13 19:20:42.731: INFO: Pod "pod-projected-configmaps-c0fb6846-c781-44c8-967f-cde6dcf9d937" satisfied condition "Succeeded or Failed"
Aug 13 19:20:42.734: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-c0fb6846-c781-44c8-967f-cde6dcf9d937 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 13 19:20:42.767: INFO: Waiting for pod pod-projected-configmaps-c0fb6846-c781-44c8-967f-cde6dcf9d937 to disappear
Aug 13 19:20:43.109: INFO: Pod pod-projected-configmaps-c0fb6846-c781-44c8-967f-cde6dcf9d937 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:20:43.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9509" for this suite.

• [SLOW TEST:7.289 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":275,"completed":203,"skipped":3392,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:20:43.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 13 19:20:43.261: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 13 19:20:43.289: INFO: Waiting for terminating namespaces to be deleted...
Aug 13 19:20:43.291: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
Aug 13 19:20:43.297: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.297: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 13 19:20:43.297: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.297: INFO: 	Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 13 19:20:43.297: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.297: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 13 19:20:43.297: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.297: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 13 19:20:43.297: INFO: agnhost-master-b4rnn from kubectl-8609 started at 2020-08-13 19:20:18 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.297: INFO: 	Container agnhost-master ready: false, restart count 0
Aug 13 19:20:43.297: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.297: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 13 19:20:43.297: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.297: INFO: 	Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 13 19:20:43.297: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 13 19:20:43.303: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11 18:51:39 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.303: INFO: 	Container rally-7104017d-j5l4uv4e ready: true, restart count 1
Aug 13 19:20:43.303: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 18:16:03 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.303: INFO: 	Container rally-6c5ea4be-96nyoha6 ready: true, restart count 52
Aug 13 19:20:43.303: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.303: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 13 19:20:43.303: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.303: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 13 19:20:43.303: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.303: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 13 19:20:43.303: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 13 19:20:43.303: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162aea208a5837f8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162aea208d28f768], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:20:44.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4472" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":275,"completed":204,"skipped":3408,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:20:44.328: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:20:46.369: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:20:48.679: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943246, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943246, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943247, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943245, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:20:50.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943246, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943246, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943247, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943245, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:20:52.756: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943246, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943246, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943247, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943245, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:20:55.720: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:20:56.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9865" for this suite.
STEP: Destroying namespace "webhook-9865-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:12.311 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":275,"completed":205,"skipped":3411,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:20:56.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test override command
Aug 13 19:20:56.763: INFO: Waiting up to 5m0s for pod "client-containers-912124c5-65cf-4249-8dc5-8c94b3bd6344" in namespace "containers-3720" to be "Succeeded or Failed"
Aug 13 19:20:56.843: INFO: Pod "client-containers-912124c5-65cf-4249-8dc5-8c94b3bd6344": Phase="Pending", Reason="", readiness=false. Elapsed: 80.020141ms
Aug 13 19:20:58.953: INFO: Pod "client-containers-912124c5-65cf-4249-8dc5-8c94b3bd6344": Phase="Pending", Reason="", readiness=false. Elapsed: 2.190111625s
Aug 13 19:21:01.229: INFO: Pod "client-containers-912124c5-65cf-4249-8dc5-8c94b3bd6344": Phase="Pending", Reason="", readiness=false. Elapsed: 4.46593112s
Aug 13 19:21:03.271: INFO: Pod "client-containers-912124c5-65cf-4249-8dc5-8c94b3bd6344": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.507392372s
STEP: Saw pod success
Aug 13 19:21:03.271: INFO: Pod "client-containers-912124c5-65cf-4249-8dc5-8c94b3bd6344" satisfied condition "Succeeded or Failed"
Aug 13 19:21:03.363: INFO: Trying to get logs from node kali-worker pod client-containers-912124c5-65cf-4249-8dc5-8c94b3bd6344 container test-container: 
STEP: delete the pod
Aug 13 19:21:03.568: INFO: Waiting for pod client-containers-912124c5-65cf-4249-8dc5-8c94b3bd6344 to disappear
Aug 13 19:21:03.601: INFO: Pod client-containers-912124c5-65cf-4249-8dc5-8c94b3bd6344 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:21:03.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3720" for this suite.

• [SLOW TEST:6.971 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":275,"completed":206,"skipped":3434,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:21:03.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 13 19:21:08.462: INFO: Successfully updated pod "labelsupdate72d4fcce-9649-45c4-b116-4b86c0d1470d"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:21:12.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7999" for this suite.

• [SLOW TEST:8.971 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":207,"skipped":3500,"failed":0}
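The labels-update spec above waits for the kubelet to rewrite the downward API volume file after the pod's labels are patched. As a hedged sketch (the file format comes from the downward API documentation, not from this log), the projected `labels` file contains one `key="value"` pair per line with keys sorted:

```python
def render_labels(labels: dict) -> str:
    """Serialize pod labels the way the downward API projects them into a
    volume file: one key="value" pair per line, keys sorted.
    (Sketch based on the documented downward API file format.)"""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(labels.items()))

# The test's polling loop effectively greps this file for the new value.
print(render_labels({"key1": "value1", "key2": "value2"}))
```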
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:21:12.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 19:21:12.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f" in namespace "projected-7971" to be "Succeeded or Failed"
Aug 13 19:21:12.867: INFO: Pod "downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 32.537424ms
Aug 13 19:21:15.049: INFO: Pod "downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.214674702s
Aug 13 19:21:17.151: INFO: Pod "downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316403685s
Aug 13 19:21:19.258: INFO: Pod "downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.423857776s
Aug 13 19:21:21.337: INFO: Pod "downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.50198507s
STEP: Saw pod success
Aug 13 19:21:21.337: INFO: Pod "downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f" satisfied condition "Succeeded or Failed"
Aug 13 19:21:21.339: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f container client-container: 
STEP: delete the pod
Aug 13 19:21:21.520: INFO: Waiting for pod downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f to disappear
Aug 13 19:21:21.536: INFO: Pod downwardapi-volume-389f158e-dca6-4666-9d96-e2b141ea1f9f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:21:21.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7971" for this suite.

• [SLOW TEST:8.961 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":208,"skipped":3503,"failed":0}
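The DefaultMode spec above asserts that the projected file carries the mode set in the volume source. A detail worth noting: `defaultMode` is a plain integer field in the API, so the octal `0644` must be written as `420` in JSON manifests. A quick sketch of the mapping:

```python
import stat

def describe_default_mode(default_mode: int) -> str:
    """Render a volume defaultMode integer the way `ls -l` shows a file."""
    return stat.filemode(stat.S_IFREG | default_mode)

# 0o644 == 420 decimal; JSON manifests need the decimal form.
print(describe_default_mode(0o644))  # -rw-r--r--
```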
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:21:21.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:21:23.422: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:21:25.433: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943283, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943283, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943283, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943282, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:21:28.474: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:21:28.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1971-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:21:29.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3660" for this suite.
STEP: Destroying namespace "webhook-3660-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.271 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":275,"completed":209,"skipped":3505,"failed":0}
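The sample webhook deployed above mutates custom resources by returning a base64-encoded JSONPatch in its `AdmissionReview` response. A minimal sketch of that response shape, per the `admission.k8s.io/v1` contract (the specific patch operation here is illustrative, not the one the e2e webhook actually applies):

```python
import base64
import json

def admission_response(uid: str, patch_ops: list) -> dict:
    """Minimal AdmissionReview response that mutates the object via a
    base64-encoded JSONPatch (sketch of the admission.k8s.io/v1 shape)."""
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": {
            "uid": uid,  # must echo the request's UID
            "allowed": True,
            "patchType": "JSONPatch",
            "patch": base64.b64encode(json.dumps(patch_ops).encode()).decode(),
        },
    }

# Hypothetical patch: add a label to the incoming custom resource.
resp = admission_response(
    "abc-123",
    [{"op": "add", "path": "/metadata/labels/mutated", "value": "true"}],
)
```

Because the API server applies the patch before persisting the object, the mutation survives even when the CRD's storage version changes between v1 and v2, which is exactly what the spec above verifies.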
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:21:29.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:21:46.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-9150" for this suite.

• [SLOW TEST:16.818 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":275,"completed":210,"skipped":3522,"failed":0}
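The scoped-quota spec above hinges on how a pod is classified: the best-effort quota counts a pod only if it is in the BestEffort QoS class, and the not-best-effort quota counts everything else. A simplified sketch of that classification (containers here are plain dicts with optional `requests`/`limits` maps; the full kubelet QoS rules also distinguish Guaranteed from Burstable):

```python
def is_best_effort(containers: list) -> bool:
    """A pod is BestEffort iff no container specifies any resource
    requests or limits (simplified sketch of the QoS rules)."""
    return all(
        not c.get("requests") and not c.get("limits") for c in containers
    )

# Counted by the best-effort quota:
print(is_best_effort([{}]))
# Counted by the not-best-effort quota instead:
print(is_best_effort([{"requests": {"cpu": "100m"}}]))
```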
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:21:46.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:22:16.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-279" for this suite.
STEP: Destroying namespace "nsdeletetest-7650" for this suite.
Aug 13 19:22:16.940: INFO: Namespace nsdeletetest-7650 was already deleted
STEP: Destroying namespace "nsdeletetest-3473" for this suite.

• [SLOW TEST:30.311 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":275,"completed":211,"skipped":3531,"failed":0}
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:22:16.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-6741
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a new StatefulSet
Aug 13 19:22:17.146: INFO: Found 0 stateful pods, waiting for 3
Aug 13 19:22:27.150: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 19:22:27.150: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 19:22:27.150: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 13 19:22:37.152: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 19:22:37.152: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 19:22:37.152: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Aug 13 19:22:37.176: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 13 19:22:47.414: INFO: Updating stateful set ss2
Aug 13 19:22:48.081: INFO: Waiting for Pod statefulset-6741/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Aug 13 19:22:58.771: INFO: Found 2 stateful pods, waiting for 3
Aug 13 19:23:08.777: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 19:23:08.777: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 13 19:23:08.777: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 13 19:23:08.801: INFO: Updating stateful set ss2
Aug 13 19:23:08.867: INFO: Waiting for Pod statefulset-6741/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Aug 13 19:23:18.893: INFO: Updating stateful set ss2
Aug 13 19:23:19.037: INFO: Waiting for StatefulSet statefulset-6741/ss2 to complete update
Aug 13 19:23:19.037: INFO: Waiting for Pod statefulset-6741/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Aug 13 19:23:29.045: INFO: Deleting all statefulset in ns statefulset-6741
Aug 13 19:23:29.047: INFO: Scaling statefulset ss2 to 0
Aug 13 19:24:09.098: INFO: Waiting for statefulset status.replicas updated to 0
Aug 13 19:24:09.101: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:24:09.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6741" for this suite.

• [SLOW TEST:112.224 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":275,"completed":212,"skipped":3534,"failed":0}
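The canary steps in the spec above are driven by the RollingUpdate strategy's `partition` field: only pods whose ordinal is greater than or equal to the partition move to the new revision, which is why setting the partition above the replica count applies no update and why the canary touched ss2-2 first. A sketch of that selection rule:

```python
def pods_moved_to_new_revision(replicas: int, partition: int) -> list:
    """Under RollingUpdate with a partition, only pods with ordinal >=
    partition are updated; lower ordinals keep the old revision."""
    return [f"ss2-{i}" for i in range(replicas) if i >= partition]

# Canary step: partition=2 updates only the highest-ordinal pod.
print(pods_moved_to_new_revision(3, 2))
# Partition greater than the replica count: no pods are updated.
print(pods_moved_to_new_revision(3, 3))
```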
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:24:09.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 13 19:24:14.441: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:24:14.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7787" for this suite.

• [SLOW TEST:5.327 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":213,"skipped":3550,"failed":0}
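The assertion `Expected: &{} to match Container's Termination Message: --` above is the interesting part: with `TerminationMessagePolicy: FallbackToLogsOnError`, the kubelet falls back to the container logs only when the container *fails* and wrote nothing to the termination message file, so a succeeding pod reports an empty message. A simplified sketch of that decision:

```python
def effective_termination_message(policy: str, exit_code: int,
                                  message_file: str, logs: str) -> str:
    """Simplified sketch of how the termination message is resolved."""
    if message_file:
        # An explicit message always wins, regardless of policy.
        return message_file
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        # Fall back to the log tail only on failure.
        return logs
    return ""

# The case the test above exercises: pod succeeded, file empty -> "".
print(repr(effective_termination_message(
    "FallbackToLogsOnError", 0, "", "some log output")))
```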
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:24:14.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating the pod
Aug 13 19:24:14.587: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:24:23.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1552" for this suite.

• [SLOW TEST:9.032 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":275,"completed":214,"skipped":3600,"failed":0}
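What "invoke init containers" means in the spec above: init containers run one at a time, in declaration order, each to completion, and only then do the app containers start (with RestartAlways, the app containers then keep running). A sketch of that ordering:

```python
def container_start_order(init_containers: list, app_containers: list) -> list:
    """Init containers run sequentially to completion before any app
    container starts (sketch of the ordering the test verifies)."""
    order = [f"run {name} to completion" for name in init_containers]
    order.append("start app containers: " + ", ".join(app_containers))
    return order

# Hypothetical names, mirroring the typical two-init-container layout.
print(container_start_order(["init1", "init2"], ["run1"]))
```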
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:24:23.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 19:24:23.630: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36619893-f651-4a91-a7b0-b72bbc6d14d5" in namespace "projected-6790" to be "Succeeded or Failed"
Aug 13 19:24:23.644: INFO: Pod "downwardapi-volume-36619893-f651-4a91-a7b0-b72bbc6d14d5": Phase="Pending", Reason="", readiness=false. Elapsed: 13.304485ms
Aug 13 19:24:25.647: INFO: Pod "downwardapi-volume-36619893-f651-4a91-a7b0-b72bbc6d14d5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016926082s
Aug 13 19:24:27.669: INFO: Pod "downwardapi-volume-36619893-f651-4a91-a7b0-b72bbc6d14d5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038159392s
STEP: Saw pod success
Aug 13 19:24:27.669: INFO: Pod "downwardapi-volume-36619893-f651-4a91-a7b0-b72bbc6d14d5" satisfied condition "Succeeded or Failed"
Aug 13 19:24:27.672: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-36619893-f651-4a91-a7b0-b72bbc6d14d5 container client-container: 
STEP: delete the pod
Aug 13 19:24:27.714: INFO: Waiting for pod downwardapi-volume-36619893-f651-4a91-a7b0-b72bbc6d14d5 to disappear
Aug 13 19:24:27.829: INFO: Pod downwardapi-volume-36619893-f651-4a91-a7b0-b72bbc6d14d5 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:24:27.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6790" for this suite.
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":215,"skipped":3611,"failed":0}
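The cpu-limit spec above projects the limit through a `resourceFieldRef`, whose value is divided by the divisor and rounded up to an integer. A sketch of that conversion (the 1250m figure is illustrative, not read from this log):

```python
import math

def exposed_resource_value(quantity_millis: int, divisor_millis: int = 1000) -> int:
    """resourceFieldRef values are divided by the divisor and rounded up,
    so a 1250m cpu limit with the default divisor "1" is exposed as 2."""
    return math.ceil(quantity_millis / divisor_millis)

print(exposed_resource_value(1250))  # 2
```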
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:24:27.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:24:28.957: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:24:31.240: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943468, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943468, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943469, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943468, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:24:34.291: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:24:34.295: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-274-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:24:35.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5987" for this suite.
STEP: Destroying namespace "webhook-5987-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.660 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":275,"completed":216,"skipped":3614,"failed":0}
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:24:35.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:25:14.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8560" for this suite.

• [SLOW TEST:38.648 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:41
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":275,"completed":217,"skipped":3614,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:25:14.148: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 19:25:14.256: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0e8e1641-a554-42fd-b921-f1061796b318" in namespace "downward-api-9414" to be "Succeeded or Failed"
Aug 13 19:25:14.259: INFO: Pod "downwardapi-volume-0e8e1641-a554-42fd-b921-f1061796b318": Phase="Pending", Reason="", readiness=false. Elapsed: 3.012739ms
Aug 13 19:25:16.297: INFO: Pod "downwardapi-volume-0e8e1641-a554-42fd-b921-f1061796b318": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040466696s
Aug 13 19:25:18.301: INFO: Pod "downwardapi-volume-0e8e1641-a554-42fd-b921-f1061796b318": Phase="Running", Reason="", readiness=true. Elapsed: 4.044712667s
Aug 13 19:25:20.315: INFO: Pod "downwardapi-volume-0e8e1641-a554-42fd-b921-f1061796b318": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058180562s
STEP: Saw pod success
Aug 13 19:25:20.315: INFO: Pod "downwardapi-volume-0e8e1641-a554-42fd-b921-f1061796b318" satisfied condition "Succeeded or Failed"
Aug 13 19:25:20.319: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-0e8e1641-a554-42fd-b921-f1061796b318 container client-container: 
STEP: delete the pod
Aug 13 19:25:20.549: INFO: Waiting for pod downwardapi-volume-0e8e1641-a554-42fd-b921-f1061796b318 to disappear
Aug 13 19:25:20.552: INFO: Pod downwardapi-volume-0e8e1641-a554-42fd-b921-f1061796b318 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:25:20.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9414" for this suite.

• [SLOW TEST:6.467 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":218,"skipped":3628,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:25:20.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:25:26.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9175" for this suite.

• [SLOW TEST:6.411 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":275,"completed":219,"skipped":3657,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:25:27.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a Namespace [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a Namespace
STEP: patching the Namespace
STEP: getting the Namespace and ensuring it has the label
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:25:27.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-3988" for this suite.
STEP: Destroying namespace "nspatchtest-ae586b25-ae1b-4fce-815e-9aef03ceb8c1-8482" for this suite.
•{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":275,"completed":220,"skipped":3673,"failed":0}
SSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:25:28.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:698
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-3873
STEP: Creating active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-3873
STEP: creating replication controller externalsvc in namespace services-3873
I0813 19:25:28.351690       7 runners.go:190] Created replication controller with name: externalsvc, namespace: services-3873, replica count: 2
I0813 19:25:31.402235       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 19:25:34.402475       7 runners.go:190] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Aug 13 19:25:34.448: INFO: Creating new exec pod
Aug 13 19:25:38.529: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config exec --namespace=services-3873 execpoddvk5v -- /bin/sh -x -c nslookup clusterip-service'
Aug 13 19:25:38.736: INFO: stderr: "I0813 19:25:38.665892    3108 log.go:172] (0xc00003a580) (0xc000a5c000) Create stream\nI0813 19:25:38.665958    3108 log.go:172] (0xc00003a580) (0xc000a5c000) Stream added, broadcasting: 1\nI0813 19:25:38.668202    3108 log.go:172] (0xc00003a580) Reply frame received for 1\nI0813 19:25:38.668259    3108 log.go:172] (0xc00003a580) (0xc000bb21e0) Create stream\nI0813 19:25:38.668350    3108 log.go:172] (0xc00003a580) (0xc000bb21e0) Stream added, broadcasting: 3\nI0813 19:25:38.669267    3108 log.go:172] (0xc00003a580) Reply frame received for 3\nI0813 19:25:38.669293    3108 log.go:172] (0xc00003a580) (0xc000bb2280) Create stream\nI0813 19:25:38.669311    3108 log.go:172] (0xc00003a580) (0xc000bb2280) Stream added, broadcasting: 5\nI0813 19:25:38.670055    3108 log.go:172] (0xc00003a580) Reply frame received for 5\nI0813 19:25:38.719392    3108 log.go:172] (0xc00003a580) Data frame received for 5\nI0813 19:25:38.719416    3108 log.go:172] (0xc000bb2280) (5) Data frame handling\nI0813 19:25:38.719433    3108 log.go:172] (0xc000bb2280) (5) Data frame sent\n+ nslookup clusterip-service\nI0813 19:25:38.727441    3108 log.go:172] (0xc00003a580) Data frame received for 3\nI0813 19:25:38.727468    3108 log.go:172] (0xc000bb21e0) (3) Data frame handling\nI0813 19:25:38.727489    3108 log.go:172] (0xc000bb21e0) (3) Data frame sent\nI0813 19:25:38.728255    3108 log.go:172] (0xc00003a580) Data frame received for 3\nI0813 19:25:38.728271    3108 log.go:172] (0xc000bb21e0) (3) Data frame handling\nI0813 19:25:38.728277    3108 log.go:172] (0xc000bb21e0) (3) Data frame sent\nI0813 19:25:38.728705    3108 log.go:172] (0xc00003a580) Data frame received for 5\nI0813 19:25:38.728718    3108 log.go:172] (0xc000bb2280) (5) Data frame handling\nI0813 19:25:38.728980    3108 log.go:172] (0xc00003a580) Data frame received for 3\nI0813 19:25:38.729024    3108 log.go:172] (0xc000bb21e0) (3) Data frame handling\nI0813 19:25:38.730445    3108 log.go:172] 
(0xc00003a580) Data frame received for 1\nI0813 19:25:38.730459    3108 log.go:172] (0xc000a5c000) (1) Data frame handling\nI0813 19:25:38.730470    3108 log.go:172] (0xc000a5c000) (1) Data frame sent\nI0813 19:25:38.730482    3108 log.go:172] (0xc00003a580) (0xc000a5c000) Stream removed, broadcasting: 1\nI0813 19:25:38.730545    3108 log.go:172] (0xc00003a580) Go away received\nI0813 19:25:38.730827    3108 log.go:172] (0xc00003a580) (0xc000a5c000) Stream removed, broadcasting: 1\nI0813 19:25:38.730840    3108 log.go:172] (0xc00003a580) (0xc000bb21e0) Stream removed, broadcasting: 3\nI0813 19:25:38.730845    3108 log.go:172] (0xc00003a580) (0xc000bb2280) Stream removed, broadcasting: 5\n"
Aug 13 19:25:38.737: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-3873.svc.cluster.local\tcanonical name = externalsvc.services-3873.svc.cluster.local.\nName:\texternalsvc.services-3873.svc.cluster.local\nAddress: 10.98.62.202\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-3873, will wait for the garbage collector to delete the pods
Aug 13 19:25:38.796: INFO: Deleting ReplicationController externalsvc took: 6.528438ms
Aug 13 19:25:39.197: INFO: Terminating ReplicationController externalsvc pods took: 400.307891ms
Aug 13 19:25:53.428: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:25:53.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3873" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:702

• [SLOW TEST:25.418 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":275,"completed":221,"skipped":3679,"failed":0}
S
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:25:53.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating secret with name secret-test-6d94b230-6ce4-4722-950a-5c01fe289c65
STEP: Creating a pod to test consume secrets
Aug 13 19:25:53.579: INFO: Waiting up to 5m0s for pod "pod-secrets-51a656b3-bac2-4fb1-ae19-3b6877b57bc1" in namespace "secrets-7976" to be "Succeeded or Failed"
Aug 13 19:25:53.584: INFO: Pod "pod-secrets-51a656b3-bac2-4fb1-ae19-3b6877b57bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.623979ms
Aug 13 19:25:55.900: INFO: Pod "pod-secrets-51a656b3-bac2-4fb1-ae19-3b6877b57bc1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.32061103s
Aug 13 19:25:58.112: INFO: Pod "pod-secrets-51a656b3-bac2-4fb1-ae19-3b6877b57bc1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.532570452s
STEP: Saw pod success
Aug 13 19:25:58.112: INFO: Pod "pod-secrets-51a656b3-bac2-4fb1-ae19-3b6877b57bc1" satisfied condition "Succeeded or Failed"
Aug 13 19:25:58.115: INFO: Trying to get logs from node kali-worker pod pod-secrets-51a656b3-bac2-4fb1-ae19-3b6877b57bc1 container secret-volume-test: 
STEP: delete the pod
Aug 13 19:25:59.047: INFO: Waiting for pod pod-secrets-51a656b3-bac2-4fb1-ae19-3b6877b57bc1 to disappear
Aug 13 19:25:59.189: INFO: Pod pod-secrets-51a656b3-bac2-4fb1-ae19-3b6877b57bc1 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:25:59.189: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7976" for this suite.

• [SLOW TEST:5.727 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:36
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":275,"completed":222,"skipped":3680,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:25:59.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:26:10.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5147" for this suite.

• [SLOW TEST:11.380 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":275,"completed":223,"skipped":3697,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:26:10.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 13 19:26:10.869: INFO: Waiting up to 5m0s for pod "downward-api-69881152-b194-4a7d-adb8-cc644acf1e8b" in namespace "downward-api-7237" to be "Succeeded or Failed"
Aug 13 19:26:10.896: INFO: Pod "downward-api-69881152-b194-4a7d-adb8-cc644acf1e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 27.857269ms
Aug 13 19:26:12.900: INFO: Pod "downward-api-69881152-b194-4a7d-adb8-cc644acf1e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031746529s
Aug 13 19:26:14.956: INFO: Pod "downward-api-69881152-b194-4a7d-adb8-cc644acf1e8b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087300419s
Aug 13 19:26:16.960: INFO: Pod "downward-api-69881152-b194-4a7d-adb8-cc644acf1e8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.091853676s
STEP: Saw pod success
Aug 13 19:26:16.961: INFO: Pod "downward-api-69881152-b194-4a7d-adb8-cc644acf1e8b" satisfied condition "Succeeded or Failed"
Aug 13 19:26:16.963: INFO: Trying to get logs from node kali-worker pod downward-api-69881152-b194-4a7d-adb8-cc644acf1e8b container dapi-container: 
STEP: delete the pod
Aug 13 19:26:17.006: INFO: Waiting for pod downward-api-69881152-b194-4a7d-adb8-cc644acf1e8b to disappear
Aug 13 19:26:17.023: INFO: Pod downward-api-69881152-b194-4a7d-adb8-cc644acf1e8b no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:26:17.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7237" for this suite.

• [SLOW TEST:6.449 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":275,"completed":224,"skipped":3714,"failed":0}
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:26:17.032: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-320aae88-790b-43bc-8c68-5529e95547ed
STEP: Creating a pod to test consume secrets
Aug 13 19:26:17.132: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4e8fb815-adff-4000-b877-071f58a71809" in namespace "projected-9235" to be "Succeeded or Failed"
Aug 13 19:26:17.170: INFO: Pod "pod-projected-secrets-4e8fb815-adff-4000-b877-071f58a71809": Phase="Pending", Reason="", readiness=false. Elapsed: 37.531297ms
Aug 13 19:26:19.267: INFO: Pod "pod-projected-secrets-4e8fb815-adff-4000-b877-071f58a71809": Phase="Pending", Reason="", readiness=false. Elapsed: 2.134824561s
Aug 13 19:26:21.297: INFO: Pod "pod-projected-secrets-4e8fb815-adff-4000-b877-071f58a71809": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.164540533s
STEP: Saw pod success
Aug 13 19:26:21.297: INFO: Pod "pod-projected-secrets-4e8fb815-adff-4000-b877-071f58a71809" satisfied condition "Succeeded or Failed"
Aug 13 19:26:21.299: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-4e8fb815-adff-4000-b877-071f58a71809 container projected-secret-volume-test: 
STEP: delete the pod
Aug 13 19:26:21.386: INFO: Waiting for pod pod-projected-secrets-4e8fb815-adff-4000-b877-071f58a71809 to disappear
Aug 13 19:26:21.440: INFO: Pod pod-projected-secrets-4e8fb815-adff-4000-b877-071f58a71809 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:26:21.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9235" for this suite.
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":225,"skipped":3718,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:26:21.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:26:21.855: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:26:23.871: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943581, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943581, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943581, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943581, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:26:26.933: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:26:37.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5185" for this suite.
STEP: Destroying namespace "webhook-5185-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:16.000 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":275,"completed":226,"skipped":3788,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:26:37.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:26:37.564: INFO: Creating deployment "test-recreate-deployment"
Aug 13 19:26:37.581: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Aug 13 19:26:37.671: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 13 19:26:39.809: INFO: Waiting for deployment "test-recreate-deployment" to complete
Aug 13 19:26:39.881: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943597, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943597, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943597, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943597, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-74d98b5f7c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:26:41.885: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 13 19:26:41.890: INFO: Updating deployment test-recreate-deployment
Aug 13 19:26:41.890: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 13 19:26:43.456: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-1512 /apis/apps/v1/namespaces/deployment-1512/deployments/test-recreate-deployment bcb648ae-a175-4fd7-bbe8-a548e22c6816 9294246 2 2020-08-13 19:26:37 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  [{e2e.test Update apps/v1 2020-08-13 19:26:41 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:progressDeadlineSeconds":{},"f:replicas":{},"f:revisionHistoryLimit":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{}}},"f:strategy":{"f:type":{}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}}},}} {kube-controller-manager Update apps/v1 2020-08-13 19:26:43 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/revision":{}}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"Available\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Progressing\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:observedGeneration":{},"f:replicas":{},"f:unavailableReplicas":{},"f:updatedReplicas":{}}},}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004a55ee8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-08-13 19:26:43 +0000 UTC,LastTransitionTime:2020-08-13 19:26:43 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-d5667d9c7" is progressing.,LastUpdateTime:2020-08-13 19:26:43 +0000 UTC,LastTransitionTime:2020-08-13 19:26:37 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Aug 13 19:26:43.543: INFO: New ReplicaSet "test-recreate-deployment-d5667d9c7" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-d5667d9c7  deployment-1512 /apis/apps/v1/namespaces/deployment-1512/replicasets/test-recreate-deployment-d5667d9c7 f5b1d0f4-d5ed-4858-9800-5c93d0449046 9294245 1 2020-08-13 19:26:42 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment bcb648ae-a175-4fd7-bbe8-a548e22c6816 0xc0032d8420 0xc0032d8421}] []  [{kube-controller-manager Update apps/v1 2020-08-13 19:26:42 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb648ae-a175-4fd7-bbe8-a548e22c6816\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:fullyLabeledReplicas":{},"f:observedGeneration":{},"f:replicas":{}}},}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: d5667d9c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032d8498  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 13 19:26:43.543: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 13 19:26:43.544: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-74d98b5f7c  deployment-1512 /apis/apps/v1/namespaces/deployment-1512/replicasets/test-recreate-deployment-74d98b5f7c 64081f7d-196c-4bdc-a9af-8124cc518b68 9294235 2 2020-08-13 19:26:37 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment bcb648ae-a175-4fd7-bbe8-a548e22c6816 0xc0032d8327 0xc0032d8328}] []  [{kube-controller-manager Update apps/v1 2020-08-13 19:26:42 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:annotations":{".":{},"f:deployment.kubernetes.io/desired-replicas":{},"f:deployment.kubernetes.io/max-replicas":{},"f:deployment.kubernetes.io/revision":{}},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"bcb648ae-a175-4fd7-bbe8-a548e22c6816\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:replicas":{},"f:selector":{"f:matchLabels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:template":{"f:metadata":{"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}}},"f:spec":{"f:containers":{"k:{\"name\":\"agnhost\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}}},"f:status":{"f:observedGeneration":{},"f:replicas":{}}},}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 74d98b5f7c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:74d98b5f7c] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0032d83b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 13 19:26:43.624: INFO: Pod "test-recreate-deployment-d5667d9c7-wjqxd" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-d5667d9c7-wjqxd test-recreate-deployment-d5667d9c7- deployment-1512 /api/v1/namespaces/deployment-1512/pods/test-recreate-deployment-d5667d9c7-wjqxd a7eae8be-243d-4a48-b1a0-8e64db506ef9 9294248 0 2020-08-13 19:26:42 +0000 UTC   map[name:sample-pod-3 pod-template-hash:d5667d9c7] map[] [{apps/v1 ReplicaSet test-recreate-deployment-d5667d9c7 f5b1d0f4-d5ed-4858-9800-5c93d0449046 0xc0032d8960 0xc0032d8961}] []  [{kube-controller-manager Update v1 2020-08-13 19:26:42 +0000 UTC FieldsV1 FieldsV1{Raw:{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:name":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"f5b1d0f4-d5ed-4858-9800-5c93d0449046\"}":{".":{},"f:apiVersion":{},"f:blockOwnerDeletion":{},"f:controller":{},"f:kind":{},"f:name":{},"f:uid":{}}}},"f:spec":{"f:containers":{"k:{\"name\":\"httpd\"}":{".":{},"f:image":{},"f:imagePullPolicy":{},"f:name":{},"f:resources":{},"f:securityContext":{},"f:terminationMessagePath":{},"f:terminationMessagePolicy":{}}},"f:dnsPolicy":{},"f:enableServiceLinks":{},"f:restartPolicy":{},"f:schedulerName":{},"f:securityContext":{},"f:terminationGracePeriodSeconds":{}}},}} {kubelet Update v1 2020-08-13 19:26:43 +0000 UTC FieldsV1 &FieldsV1{Raw:{"f:status":{"f:conditions":{"k:{\"type\":\"ContainersReady\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Initialized\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:status":{},"f:type":{}},"k:{\"type\":\"Ready\"}":{".":{},"f:lastProbeTime":{},"f:lastTransitionTime":{},"f:message":{},"f:reason":{},"f:status":{},"f:type":{}}},"f:containerStatuses":{},"f:hostIP":{},"f:startTime":{}}},}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hfr94,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hfr94,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hfr94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:26:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:26:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:26:43 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:26:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:,StartTime:2020-08-13 19:26:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:26:43.624: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1512" for this suite.

• [SLOW TEST:6.288 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":227,"skipped":3811,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:26:43.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:26:43.912: INFO: Waiting up to 5m0s for pod "busybox-user-65534-7a8f13b8-e72c-4afc-999b-6f2dea8d8970" in namespace "security-context-test-681" to be "Succeeded or Failed"
Aug 13 19:26:43.985: INFO: Pod "busybox-user-65534-7a8f13b8-e72c-4afc-999b-6f2dea8d8970": Phase="Pending", Reason="", readiness=false. Elapsed: 72.617333ms
Aug 13 19:26:46.148: INFO: Pod "busybox-user-65534-7a8f13b8-e72c-4afc-999b-6f2dea8d8970": Phase="Pending", Reason="", readiness=false. Elapsed: 2.235359929s
Aug 13 19:26:49.055: INFO: Pod "busybox-user-65534-7a8f13b8-e72c-4afc-999b-6f2dea8d8970": Phase="Pending", Reason="", readiness=false. Elapsed: 5.142569059s
Aug 13 19:26:51.067: INFO: Pod "busybox-user-65534-7a8f13b8-e72c-4afc-999b-6f2dea8d8970": Phase="Pending", Reason="", readiness=false. Elapsed: 7.155207501s
Aug 13 19:26:53.862: INFO: Pod "busybox-user-65534-7a8f13b8-e72c-4afc-999b-6f2dea8d8970": Phase="Pending", Reason="", readiness=false. Elapsed: 9.949352478s
Aug 13 19:26:55.878: INFO: Pod "busybox-user-65534-7a8f13b8-e72c-4afc-999b-6f2dea8d8970": Phase="Running", Reason="", readiness=true. Elapsed: 11.965437416s
Aug 13 19:26:58.058: INFO: Pod "busybox-user-65534-7a8f13b8-e72c-4afc-999b-6f2dea8d8970": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.14542486s
Aug 13 19:26:58.058: INFO: Pod "busybox-user-65534-7a8f13b8-e72c-4afc-999b-6f2dea8d8970" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:26:58.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-681" for this suite.

• [SLOW TEST:14.416 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":228,"skipped":3881,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:26:58.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:27:06.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6041" for this suite.

• [SLOW TEST:8.388 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":275,"completed":229,"skipped":3903,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:27:06.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1418
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Aug 13 19:27:06.881: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3807'
Aug 13 19:27:06.992: INFO: stderr: ""
Aug 13 19:27:06.992: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1423
Aug 13 19:27:07.023: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3807'
Aug 13 19:27:13.465: INFO: stderr: ""
Aug 13 19:27:13.465: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:27:13.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3807" for this suite.

• [SLOW TEST:7.218 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1414
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":275,"completed":230,"skipped":3930,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
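The Kubectl-run-pod test above shells out to kubectl directly, passing the server, kubeconfig, restart policy, image, and namespace seen in its log lines. A minimal sketch of that invocation as an argv builder follows; the function name `kubectl_run_pod` is illustrative, not part of the e2e framework, and the values mirror the log rather than anything you should hard-code:

```python
def kubectl_run_pod(server, kubeconfig, name, image, namespace):
    """Build the argv for the `kubectl run` call the test issues above.

    Mirrors: kubectl --server=... --kubeconfig=... run NAME
             --restart=Never --image=IMAGE --namespace=NS
    """
    return [
        "kubectl",
        f"--server={server}",
        f"--kubeconfig={kubeconfig}",
        "run", name,
        "--restart=Never",          # pod is not restarted on exit (bare pod, no controller)
        f"--image={image}",
        f"--namespace={namespace}",
    ]

# Values taken from the log lines of the test above.
argv = kubectl_run_pod(
    "https://172.30.12.66:35995",
    "/root/.kube/config",
    "e2e-test-httpd-pod",
    "docker.io/library/httpd:2.4.38-alpine",
    "kubectl-3807",
)
print(" ".join(argv))
```

`--restart=Never` is what makes kubectl create a bare Pod rather than a Deployment, which is exactly the behavior this conformance test verifies.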
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:27:13.759: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 13 19:27:19.681: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:27:19.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5757" for this suite.

• [SLOW TEST:6.108 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:133
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":231,"skipped":3958,"failed":0}
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:27:19.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:27:20.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Aug 13 19:27:20.775: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-13T19:27:20Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-13T19:27:20Z]] name:name1 resourceVersion:9294476 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7db9e3e2-f055-4d0c-b86b-465e227828a0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Aug 13 19:27:30.782: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-13T19:27:30Z generation:1 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-13T19:27:30Z]] name:name2 resourceVersion:9294514 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:43fa645c-bb9e-4744-91b1-6856283b4907] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Aug 13 19:27:40.811: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-13T19:27:20Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-13T19:27:40Z]] name:name1 resourceVersion:9294544 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7db9e3e2-f055-4d0c-b86b-465e227828a0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Aug 13 19:27:50.818: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-13T19:27:30Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-13T19:27:50Z]] name:name2 resourceVersion:9294572 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:43fa645c-bb9e-4744-91b1-6856283b4907] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Aug 13 19:28:00.827: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-13T19:27:20Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-13T19:27:40Z]] name:name1 resourceVersion:9294602 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:7db9e3e2-f055-4d0c-b86b-465e227828a0] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Aug 13 19:28:10.834: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-08-13T19:27:30Z generation:2 managedFields:[map[apiVersion:mygroup.example.com/v1beta1 fieldsType:FieldsV1 fieldsV1:map[f:content:map[.:map[] f:key:map[]] f:dummy:map[] f:num:map[.:map[] f:num1:map[] f:num2:map[]]] manager:e2e.test operation:Update time:2020-08-13T19:27:50Z]] name:name2 resourceVersion:9294629 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:43fa645c-bb9e-4744-91b1-6856283b4907] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:28:21.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8939" for this suite.

• [SLOW TEST:61.651 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":275,"completed":232,"skipped":3958,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:28:21.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:134
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 13 19:28:22.076: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:22.208: INFO: Number of nodes with available pods: 0
Aug 13 19:28:22.208: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:28:23.213: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:23.216: INFO: Number of nodes with available pods: 0
Aug 13 19:28:23.216: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:28:24.213: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:24.217: INFO: Number of nodes with available pods: 0
Aug 13 19:28:24.217: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:28:25.213: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:25.217: INFO: Number of nodes with available pods: 0
Aug 13 19:28:25.217: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:28:26.216: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:26.220: INFO: Number of nodes with available pods: 0
Aug 13 19:28:26.220: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:28:27.243: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:27.837: INFO: Number of nodes with available pods: 1
Aug 13 19:28:27.837: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:28:28.360: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:28.482: INFO: Number of nodes with available pods: 1
Aug 13 19:28:28.482: INFO: Node kali-worker is running more than one daemon pod
Aug 13 19:28:29.213: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:29.217: INFO: Number of nodes with available pods: 2
Aug 13 19:28:29.217: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 13 19:28:29.377: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:29.515: INFO: Number of nodes with available pods: 1
Aug 13 19:28:29.515: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 19:28:30.989: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:30.992: INFO: Number of nodes with available pods: 1
Aug 13 19:28:30.992: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 19:28:31.520: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:31.524: INFO: Number of nodes with available pods: 1
Aug 13 19:28:31.524: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 19:28:32.736: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:32.740: INFO: Number of nodes with available pods: 1
Aug 13 19:28:32.740: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 19:28:33.521: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:33.525: INFO: Number of nodes with available pods: 1
Aug 13 19:28:33.525: INFO: Node kali-worker2 is running more than one daemon pod
Aug 13 19:28:34.527: INFO: DaemonSet pods can't tolerate node kali-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 13 19:28:34.532: INFO: Number of nodes with available pods: 2
Aug 13 19:28:34.532: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:100
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3596, will wait for the garbage collector to delete the pods
Aug 13 19:28:34.597: INFO: Deleting DaemonSet.extensions daemon-set took: 6.925496ms
Aug 13 19:28:34.899: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.669139ms
Aug 13 19:28:45.603: INFO: Number of nodes with available pods: 0
Aug 13 19:28:45.603: INFO: Number of running nodes: 0, number of available pods: 0
Aug 13 19:28:45.605: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3596/daemonsets","resourceVersion":"9294786"},"items":null}

Aug 13 19:28:45.645: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3596/pods","resourceVersion":"9294786"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:28:45.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3596" for this suite.

• [SLOW TEST:24.143 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":233,"skipped":3991,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:28:45.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-map-72db93e1-7abf-4e9a-9e0d-1c1baaa025df
STEP: Creating a pod to test consume configMaps
Aug 13 19:28:45.786: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-805a911d-5b84-4969-b224-9358329bca8e" in namespace "projected-5125" to be "Succeeded or Failed"
Aug 13 19:28:45.975: INFO: Pod "pod-projected-configmaps-805a911d-5b84-4969-b224-9358329bca8e": Phase="Pending", Reason="", readiness=false. Elapsed: 188.672567ms
Aug 13 19:28:47.993: INFO: Pod "pod-projected-configmaps-805a911d-5b84-4969-b224-9358329bca8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.206166252s
Aug 13 19:28:50.137: INFO: Pod "pod-projected-configmaps-805a911d-5b84-4969-b224-9358329bca8e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.350329251s
Aug 13 19:28:52.141: INFO: Pod "pod-projected-configmaps-805a911d-5b84-4969-b224-9358329bca8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.354295511s
STEP: Saw pod success
Aug 13 19:28:52.141: INFO: Pod "pod-projected-configmaps-805a911d-5b84-4969-b224-9358329bca8e" satisfied condition "Succeeded or Failed"
Aug 13 19:28:52.144: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-805a911d-5b84-4969-b224-9358329bca8e container projected-configmap-volume-test: 
STEP: delete the pod
Aug 13 19:28:52.228: INFO: Waiting for pod pod-projected-configmaps-805a911d-5b84-4969-b224-9358329bca8e to disappear
Aug 13 19:28:52.238: INFO: Pod pod-projected-configmaps-805a911d-5b84-4969-b224-9358329bca8e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:28:52.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5125" for this suite.

• [SLOW TEST:6.584 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":275,"completed":234,"skipped":3996,"failed":0}
SSSSSS
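The single-line JSON records interleaved with the test output (the `{"msg":"PASSED …"}` lines) carry the suite's running tally: total specs, completed, skipped, and failed counts. A minimal sketch of extracting them from a log like this one, assuming one record per line as emitted here:

```python
import json

def parse_progress(lines):
    """Yield the structured Ginkgo progress records embedded in an e2e log.

    Each record is a single-line JSON object such as:
    {"msg":"PASSED ...","total":275,"completed":233,"skipped":3991,"failed":0}
    """
    for line in lines:
        line = line.strip()
        if line.startswith('{"msg"'):
            yield json.loads(line)

# Sample lines copied from this log; the SSSSS skip-marker line is ignored.
sample = [
    '{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":275,"completed":233,"skipped":3991,"failed":0}',
    'SSSSS',
]
records = list(parse_progress(sample))
print(records[0]["completed"], records[0]["failed"])  # prints: 233 0
```

Scanning a full run this way gives a quick pass/fail summary without re-running the suite; the last record's `failed` count is the suite's final verdict.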
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:28:52.246: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:28:52.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1647
I0813 19:28:52.358915       7 runners.go:190] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1647, replica count: 1
I0813 19:28:53.409363       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 19:28:54.409569       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 19:28:55.409761       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0813 19:28:56.410010       7 runners.go:190] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 13 19:28:56.579: INFO: Created: latency-svc-lbbsc
Aug 13 19:28:56.600: INFO: Got endpoints: latency-svc-lbbsc [89.856105ms]
Aug 13 19:28:56.647: INFO: Created: latency-svc-x9ppk
Aug 13 19:28:56.665: INFO: Got endpoints: latency-svc-x9ppk [65.341267ms]
Aug 13 19:28:56.717: INFO: Created: latency-svc-sj8cw
Aug 13 19:28:56.724: INFO: Got endpoints: latency-svc-sj8cw [123.949837ms]
Aug 13 19:28:56.874: INFO: Created: latency-svc-c6r97
Aug 13 19:28:56.880: INFO: Got endpoints: latency-svc-c6r97 [280.31193ms]
Aug 13 19:28:56.935: INFO: Created: latency-svc-hqmxw
Aug 13 19:28:56.955: INFO: Got endpoints: latency-svc-hqmxw [355.064756ms]
Aug 13 19:28:57.007: INFO: Created: latency-svc-p2wq2
Aug 13 19:28:57.027: INFO: Got endpoints: latency-svc-p2wq2 [427.214752ms]
Aug 13 19:28:57.064: INFO: Created: latency-svc-6ggf6
Aug 13 19:28:57.137: INFO: Got endpoints: latency-svc-6ggf6 [536.697953ms]
Aug 13 19:28:57.199: INFO: Created: latency-svc-27vww
Aug 13 19:28:57.211: INFO: Got endpoints: latency-svc-27vww [611.28494ms]
Aug 13 19:28:57.281: INFO: Created: latency-svc-87rhb
Aug 13 19:28:57.308: INFO: Got endpoints: latency-svc-87rhb [707.881553ms]
Aug 13 19:28:57.333: INFO: Created: latency-svc-pg58z
Aug 13 19:28:57.479: INFO: Got endpoints: latency-svc-pg58z [878.800955ms]
Aug 13 19:28:57.495: INFO: Created: latency-svc-ph46j
Aug 13 19:28:57.555: INFO: Got endpoints: latency-svc-ph46j [955.104371ms]
Aug 13 19:28:57.640: INFO: Created: latency-svc-zb2mn
Aug 13 19:28:57.653: INFO: Got endpoints: latency-svc-zb2mn [1.052685226s]
Aug 13 19:28:57.703: INFO: Created: latency-svc-gvrhq
Aug 13 19:28:57.712: INFO: Got endpoints: latency-svc-gvrhq [1.11190906s]
Aug 13 19:28:57.763: INFO: Created: latency-svc-x9whz
Aug 13 19:28:57.773: INFO: Got endpoints: latency-svc-x9whz [1.172966214s]
Aug 13 19:28:57.811: INFO: Created: latency-svc-tr46l
Aug 13 19:28:57.827: INFO: Got endpoints: latency-svc-tr46l [1.227005055s]
Aug 13 19:28:57.909: INFO: Created: latency-svc-lv8t9
Aug 13 19:28:57.917: INFO: Got endpoints: latency-svc-lv8t9 [1.317400736s]
Aug 13 19:28:57.943: INFO: Created: latency-svc-njr7r
Aug 13 19:28:57.960: INFO: Got endpoints: latency-svc-njr7r [1.294788203s]
Aug 13 19:28:57.985: INFO: Created: latency-svc-vtd9h
Aug 13 19:28:58.003: INFO: Got endpoints: latency-svc-vtd9h [1.279141417s]
Aug 13 19:28:58.048: INFO: Created: latency-svc-7v4p5
Aug 13 19:28:58.053: INFO: Got endpoints: latency-svc-7v4p5 [1.172600375s]
Aug 13 19:28:58.075: INFO: Created: latency-svc-62m7c
Aug 13 19:28:58.094: INFO: Got endpoints: latency-svc-62m7c [1.138655514s]
Aug 13 19:28:58.132: INFO: Created: latency-svc-pwnh8
Aug 13 19:28:58.190: INFO: Got endpoints: latency-svc-pwnh8 [1.163090758s]
Aug 13 19:28:58.255: INFO: Created: latency-svc-jp4wr
Aug 13 19:28:58.274: INFO: Got endpoints: latency-svc-jp4wr [1.137565356s]
Aug 13 19:28:58.346: INFO: Created: latency-svc-smh27
Aug 13 19:28:58.393: INFO: Got endpoints: latency-svc-smh27 [1.181846543s]
Aug 13 19:28:58.508: INFO: Created: latency-svc-m2q72
Aug 13 19:28:58.512: INFO: Got endpoints: latency-svc-m2q72 [1.204413782s]
Aug 13 19:28:58.579: INFO: Created: latency-svc-2wnc4
Aug 13 19:28:58.665: INFO: Got endpoints: latency-svc-2wnc4 [1.186646172s]
Aug 13 19:28:58.741: INFO: Created: latency-svc-hhg62
Aug 13 19:28:58.756: INFO: Got endpoints: latency-svc-hhg62 [1.200709925s]
Aug 13 19:28:58.858: INFO: Created: latency-svc-5zfps
Aug 13 19:28:58.882: INFO: Got endpoints: latency-svc-5zfps [1.229262056s]
Aug 13 19:28:58.945: INFO: Created: latency-svc-dvhv2
Aug 13 19:28:58.949: INFO: Got endpoints: latency-svc-dvhv2 [1.236953957s]
Aug 13 19:28:59.035: INFO: Created: latency-svc-tgpwh
Aug 13 19:28:59.152: INFO: Got endpoints: latency-svc-tgpwh [1.378643977s]
Aug 13 19:28:59.844: INFO: Created: latency-svc-vvng2
Aug 13 19:28:59.848: INFO: Got endpoints: latency-svc-vvng2 [2.021357508s]
Aug 13 19:29:00.101: INFO: Created: latency-svc-jpq4p
Aug 13 19:29:00.366: INFO: Got endpoints: latency-svc-jpq4p [2.448644481s]
Aug 13 19:29:00.569: INFO: Created: latency-svc-2c2f4
Aug 13 19:29:00.744: INFO: Got endpoints: latency-svc-2c2f4 [2.784331354s]
Aug 13 19:29:00.792: INFO: Created: latency-svc-mcjg4
Aug 13 19:29:00.809: INFO: Got endpoints: latency-svc-mcjg4 [2.805696765s]
Aug 13 19:29:00.891: INFO: Created: latency-svc-2jxt5
Aug 13 19:29:00.895: INFO: Got endpoints: latency-svc-2jxt5 [2.842693685s]
Aug 13 19:29:01.103: INFO: Created: latency-svc-tfpgt
Aug 13 19:29:01.146: INFO: Got endpoints: latency-svc-tfpgt [3.052476847s]
Aug 13 19:29:01.257: INFO: Created: latency-svc-nsdrv
Aug 13 19:29:01.290: INFO: Got endpoints: latency-svc-nsdrv [3.099798958s]
Aug 13 19:29:01.364: INFO: Created: latency-svc-26s7f
Aug 13 19:29:01.369: INFO: Got endpoints: latency-svc-26s7f [3.094180889s]
Aug 13 19:29:01.514: INFO: Created: latency-svc-tffv5
Aug 13 19:29:01.530: INFO: Got endpoints: latency-svc-tffv5 [3.136738453s]
Aug 13 19:29:01.566: INFO: Created: latency-svc-xlszr
Aug 13 19:29:01.608: INFO: Got endpoints: latency-svc-xlszr [3.095717979s]
Aug 13 19:29:01.670: INFO: Created: latency-svc-jsw82
Aug 13 19:29:01.673: INFO: Got endpoints: latency-svc-jsw82 [3.00790658s]
Aug 13 19:29:01.731: INFO: Created: latency-svc-72ltp
Aug 13 19:29:01.747: INFO: Got endpoints: latency-svc-72ltp [2.990748167s]
Aug 13 19:29:01.813: INFO: Created: latency-svc-4z6k2
Aug 13 19:29:01.813: INFO: Got endpoints: latency-svc-4z6k2 [2.930609132s]
Aug 13 19:29:01.866: INFO: Created: latency-svc-5hfqf
Aug 13 19:29:01.886: INFO: Got endpoints: latency-svc-5hfqf [2.936848492s]
Aug 13 19:29:01.986: INFO: Created: latency-svc-ljk4m
Aug 13 19:29:01.993: INFO: Got endpoints: latency-svc-ljk4m [2.841597546s]
Aug 13 19:29:02.013: INFO: Created: latency-svc-xwsgz
Aug 13 19:29:02.026: INFO: Got endpoints: latency-svc-xwsgz [2.17700011s]
Aug 13 19:29:02.125: INFO: Created: latency-svc-z7t8b
Aug 13 19:29:02.133: INFO: Got endpoints: latency-svc-z7t8b [1.766574095s]
Aug 13 19:29:02.153: INFO: Created: latency-svc-zjfh9
Aug 13 19:29:02.163: INFO: Got endpoints: latency-svc-zjfh9 [1.4188743s]
Aug 13 19:29:02.193: INFO: Created: latency-svc-2x62v
Aug 13 19:29:02.269: INFO: Got endpoints: latency-svc-2x62v [1.459973975s]
Aug 13 19:29:02.282: INFO: Created: latency-svc-qntf6
Aug 13 19:29:02.302: INFO: Got endpoints: latency-svc-qntf6 [1.406419027s]
Aug 13 19:29:02.321: INFO: Created: latency-svc-kl442
Aug 13 19:29:02.341: INFO: Got endpoints: latency-svc-kl442 [1.194544674s]
Aug 13 19:29:02.430: INFO: Created: latency-svc-2km7v
Aug 13 19:29:02.463: INFO: Got endpoints: latency-svc-2km7v [1.172137198s]
Aug 13 19:29:02.464: INFO: Created: latency-svc-9zbf8
Aug 13 19:29:02.511: INFO: Got endpoints: latency-svc-9zbf8 [1.141842408s]
Aug 13 19:29:02.568: INFO: Created: latency-svc-fj2b5
Aug 13 19:29:02.572: INFO: Got endpoints: latency-svc-fj2b5 [1.042606866s]
Aug 13 19:29:02.601: INFO: Created: latency-svc-lgv8x
Aug 13 19:29:02.627: INFO: Got endpoints: latency-svc-lgv8x [1.018692699s]
Aug 13 19:29:02.663: INFO: Created: latency-svc-95qx4
Aug 13 19:29:02.706: INFO: Got endpoints: latency-svc-95qx4 [1.032881743s]
Aug 13 19:29:02.729: INFO: Created: latency-svc-mdnjr
Aug 13 19:29:02.748: INFO: Got endpoints: latency-svc-mdnjr [1.001678632s]
Aug 13 19:29:02.786: INFO: Created: latency-svc-rbzmb
Aug 13 19:29:02.803: INFO: Got endpoints: latency-svc-rbzmb [989.898671ms]
Aug 13 19:29:02.880: INFO: Created: latency-svc-d64gl
Aug 13 19:29:02.903: INFO: Got endpoints: latency-svc-d64gl [1.01692108s]
Aug 13 19:29:02.958: INFO: Created: latency-svc-8rp7m
Aug 13 19:29:02.965: INFO: Got endpoints: latency-svc-8rp7m [971.845102ms]
Aug 13 19:29:03.047: INFO: Created: latency-svc-xrsqw
Aug 13 19:29:03.056: INFO: Got endpoints: latency-svc-xrsqw [1.030856846s]
Aug 13 19:29:03.107: INFO: Created: latency-svc-plhvc
Aug 13 19:29:03.122: INFO: Got endpoints: latency-svc-plhvc [989.207044ms]
Aug 13 19:29:03.190: INFO: Created: latency-svc-przg8
Aug 13 19:29:03.197: INFO: Got endpoints: latency-svc-przg8 [1.03361809s]
Aug 13 19:29:03.218: INFO: Created: latency-svc-ts8cf
Aug 13 19:29:03.237: INFO: Got endpoints: latency-svc-ts8cf [968.41427ms]
Aug 13 19:29:03.278: INFO: Created: latency-svc-g2lx9
Aug 13 19:29:03.346: INFO: Got endpoints: latency-svc-g2lx9 [1.044027192s]
Aug 13 19:29:03.371: INFO: Created: latency-svc-hxmz7
Aug 13 19:29:03.388: INFO: Got endpoints: latency-svc-hxmz7 [1.047160009s]
Aug 13 19:29:03.484: INFO: Created: latency-svc-l7zlt
Aug 13 19:29:03.487: INFO: Got endpoints: latency-svc-l7zlt [1.024462009s]
Aug 13 19:29:03.521: INFO: Created: latency-svc-724w2
Aug 13 19:29:03.539: INFO: Got endpoints: latency-svc-724w2 [1.028239344s]
Aug 13 19:29:03.561: INFO: Created: latency-svc-lxl56
Aug 13 19:29:03.579: INFO: Got endpoints: latency-svc-lxl56 [1.005967742s]
Aug 13 19:29:03.640: INFO: Created: latency-svc-bdcgv
Aug 13 19:29:03.648: INFO: Got endpoints: latency-svc-bdcgv [1.020706605s]
Aug 13 19:29:03.677: INFO: Created: latency-svc-ppt58
Aug 13 19:29:03.697: INFO: Got endpoints: latency-svc-ppt58 [990.128271ms]
Aug 13 19:29:03.719: INFO: Created: latency-svc-lhlpl
Aug 13 19:29:03.789: INFO: Got endpoints: latency-svc-lhlpl [1.040892245s]
Aug 13 19:29:03.807: INFO: Created: latency-svc-fw2qj
Aug 13 19:29:03.817: INFO: Got endpoints: latency-svc-fw2qj [1.01426203s]
Aug 13 19:29:03.842: INFO: Created: latency-svc-47bpb
Aug 13 19:29:03.853: INFO: Got endpoints: latency-svc-47bpb [950.234074ms]
Aug 13 19:29:03.945: INFO: Created: latency-svc-lsl8h
Aug 13 19:29:03.963: INFO: Got endpoints: latency-svc-lsl8h [997.970743ms]
Aug 13 19:29:04.007: INFO: Created: latency-svc-jv8n2
Aug 13 19:29:04.022: INFO: Got endpoints: latency-svc-jv8n2 [965.904456ms]
Aug 13 19:29:04.118: INFO: Created: latency-svc-pfk9k
Aug 13 19:29:04.125: INFO: Got endpoints: latency-svc-pfk9k [1.002740795s]
Aug 13 19:29:04.169: INFO: Created: latency-svc-px7p7
Aug 13 19:29:04.274: INFO: Got endpoints: latency-svc-px7p7 [1.0771606s]
Aug 13 19:29:04.290: INFO: Created: latency-svc-zzghc
Aug 13 19:29:04.305: INFO: Got endpoints: latency-svc-zzghc [1.068112146s]
Aug 13 19:29:04.347: INFO: Created: latency-svc-wn4j5
Aug 13 19:29:04.436: INFO: Got endpoints: latency-svc-wn4j5 [1.090135379s]
Aug 13 19:29:04.451: INFO: Created: latency-svc-wrl5v
Aug 13 19:29:04.475: INFO: Got endpoints: latency-svc-wrl5v [1.086614794s]
Aug 13 19:29:04.587: INFO: Created: latency-svc-b9ddp
Aug 13 19:29:04.599: INFO: Got endpoints: latency-svc-b9ddp [1.111484069s]
Aug 13 19:29:04.635: INFO: Created: latency-svc-wmn7d
Aug 13 19:29:04.649: INFO: Got endpoints: latency-svc-wmn7d [1.109761865s]
Aug 13 19:29:04.683: INFO: Created: latency-svc-f4xbf
Aug 13 19:29:04.768: INFO: Got endpoints: latency-svc-f4xbf [1.18962392s]
Aug 13 19:29:04.775: INFO: Created: latency-svc-hf2fv
Aug 13 19:29:04.794: INFO: Got endpoints: latency-svc-hf2fv [1.146645083s]
Aug 13 19:29:04.833: INFO: Created: latency-svc-cr9wf
Aug 13 19:29:04.863: INFO: Got endpoints: latency-svc-cr9wf [1.16621843s]
Aug 13 19:29:04.933: INFO: Created: latency-svc-t858t
Aug 13 19:29:04.938: INFO: Got endpoints: latency-svc-t858t [1.148682506s]
Aug 13 19:29:04.969: INFO: Created: latency-svc-z2xwm
Aug 13 19:29:04.987: INFO: Got endpoints: latency-svc-z2xwm [1.169734181s]
Aug 13 19:29:05.020: INFO: Created: latency-svc-klt8p
Aug 13 19:29:05.072: INFO: Got endpoints: latency-svc-klt8p [1.218648304s]
Aug 13 19:29:05.089: INFO: Created: latency-svc-7hrs7
Aug 13 19:29:05.096: INFO: Got endpoints: latency-svc-7hrs7 [1.132312834s]
Aug 13 19:29:05.140: INFO: Created: latency-svc-hmbmw
Aug 13 19:29:05.292: INFO: Got endpoints: latency-svc-hmbmw [1.270007771s]
Aug 13 19:29:05.319: INFO: Created: latency-svc-f2mrh
Aug 13 19:29:05.359: INFO: Got endpoints: latency-svc-f2mrh [1.234576609s]
Aug 13 19:29:05.436: INFO: Created: latency-svc-d5j7n
Aug 13 19:29:05.439: INFO: Got endpoints: latency-svc-d5j7n [1.164835807s]
Aug 13 19:29:05.529: INFO: Created: latency-svc-79xqz
Aug 13 19:29:05.579: INFO: Got endpoints: latency-svc-79xqz [1.274177821s]
Aug 13 19:29:05.601: INFO: Created: latency-svc-gj9mt
Aug 13 19:29:05.614: INFO: Got endpoints: latency-svc-gj9mt [1.177515264s]
Aug 13 19:29:05.637: INFO: Created: latency-svc-2kf95
Aug 13 19:29:05.663: INFO: Got endpoints: latency-svc-2kf95 [1.188851795s]
Aug 13 19:29:05.723: INFO: Created: latency-svc-g2c7z
Aug 13 19:29:05.741: INFO: Got endpoints: latency-svc-g2c7z [1.142344323s]
Aug 13 19:29:05.765: INFO: Created: latency-svc-ndpv4
Aug 13 19:29:05.783: INFO: Got endpoints: latency-svc-ndpv4 [1.134347574s]
Aug 13 19:29:05.805: INFO: Created: latency-svc-vlfb2
Aug 13 19:29:05.867: INFO: Got endpoints: latency-svc-vlfb2 [1.098547666s]
Aug 13 19:29:05.889: INFO: Created: latency-svc-rg52j
Aug 13 19:29:05.898: INFO: Got endpoints: latency-svc-rg52j [1.10365766s]
Aug 13 19:29:05.922: INFO: Created: latency-svc-rrkk5
Aug 13 19:29:05.941: INFO: Got endpoints: latency-svc-rrkk5 [1.077944773s]
Aug 13 19:29:05.963: INFO: Created: latency-svc-wzhx5
Aug 13 19:29:06.005: INFO: Got endpoints: latency-svc-wzhx5 [1.066946017s]
Aug 13 19:29:06.038: INFO: Created: latency-svc-q5ncm
Aug 13 19:29:06.068: INFO: Got endpoints: latency-svc-q5ncm [1.080908197s]
Aug 13 19:29:06.155: INFO: Created: latency-svc-s66g9
Aug 13 19:29:06.164: INFO: Got endpoints: latency-svc-s66g9 [1.091881647s]
Aug 13 19:29:06.209: INFO: Created: latency-svc-6gh6f
Aug 13 19:29:06.225: INFO: Got endpoints: latency-svc-6gh6f [1.129606811s]
Aug 13 19:29:06.251: INFO: Created: latency-svc-twvz2
Aug 13 19:29:06.313: INFO: Got endpoints: latency-svc-twvz2 [1.019997673s]
Aug 13 19:29:06.351: INFO: Created: latency-svc-w9j9h
Aug 13 19:29:06.370: INFO: Got endpoints: latency-svc-w9j9h [1.010862946s]
Aug 13 19:29:06.392: INFO: Created: latency-svc-dsxqm
Aug 13 19:29:06.449: INFO: Got endpoints: latency-svc-dsxqm [1.009703189s]
Aug 13 19:29:06.500: INFO: Created: latency-svc-hqj59
Aug 13 19:29:06.652: INFO: Got endpoints: latency-svc-hqj59 [1.072392753s]
Aug 13 19:29:06.669: INFO: Created: latency-svc-grbw4
Aug 13 19:29:06.702: INFO: Got endpoints: latency-svc-grbw4 [1.088051286s]
Aug 13 19:29:06.749: INFO: Created: latency-svc-vnrwd
Aug 13 19:29:06.795: INFO: Got endpoints: latency-svc-vnrwd [1.131345819s]
Aug 13 19:29:06.813: INFO: Created: latency-svc-5zv9g
Aug 13 19:29:06.837: INFO: Got endpoints: latency-svc-5zv9g [1.095703478s]
Aug 13 19:29:06.867: INFO: Created: latency-svc-hpk5k
Aug 13 19:29:06.889: INFO: Got endpoints: latency-svc-hpk5k [1.105933245s]
Aug 13 19:29:06.945: INFO: Created: latency-svc-5g7s5
Aug 13 19:29:06.949: INFO: Got endpoints: latency-svc-5g7s5 [1.082133782s]
Aug 13 19:29:07.013: INFO: Created: latency-svc-d5s6t
Aug 13 19:29:07.021: INFO: Got endpoints: latency-svc-d5s6t [1.123138669s]
Aug 13 19:29:07.095: INFO: Created: latency-svc-h2svr
Aug 13 19:29:07.132: INFO: Got endpoints: latency-svc-h2svr [1.190792388s]
Aug 13 19:29:07.187: INFO: Created: latency-svc-5wfws
Aug 13 19:29:07.275: INFO: Got endpoints: latency-svc-5wfws [1.269556408s]
Aug 13 19:29:07.316: INFO: Created: latency-svc-887tc
Aug 13 19:29:07.335: INFO: Got endpoints: latency-svc-887tc [1.267240397s]
Aug 13 19:29:07.362: INFO: Created: latency-svc-fmwgq
Aug 13 19:29:07.430: INFO: Got endpoints: latency-svc-fmwgq [1.266109693s]
Aug 13 19:29:07.461: INFO: Created: latency-svc-7srh7
Aug 13 19:29:07.486: INFO: Got endpoints: latency-svc-7srh7 [1.260186662s]
Aug 13 19:29:07.515: INFO: Created: latency-svc-v6bm7
Aug 13 19:29:07.556: INFO: Got endpoints: latency-svc-v6bm7 [1.242995603s]
Aug 13 19:29:07.574: INFO: Created: latency-svc-fdmvc
Aug 13 19:29:07.595: INFO: Got endpoints: latency-svc-fdmvc [1.22437027s]
Aug 13 19:29:07.619: INFO: Created: latency-svc-l986z
Aug 13 19:29:07.723: INFO: Got endpoints: latency-svc-l986z [1.274566364s]
Aug 13 19:29:07.737: INFO: Created: latency-svc-qmhzc
Aug 13 19:29:07.751: INFO: Got endpoints: latency-svc-qmhzc [1.099093304s]
Aug 13 19:29:07.778: INFO: Created: latency-svc-jmv7q
Aug 13 19:29:07.809: INFO: Got endpoints: latency-svc-jmv7q [1.10662477s]
Aug 13 19:29:07.886: INFO: Created: latency-svc-wdm28
Aug 13 19:29:07.943: INFO: Got endpoints: latency-svc-wdm28 [1.148354762s]
Aug 13 19:29:07.943: INFO: Created: latency-svc-bw4jr
Aug 13 19:29:07.956: INFO: Got endpoints: latency-svc-bw4jr [1.119499803s]
Aug 13 19:29:07.982: INFO: Created: latency-svc-m8nrf
Aug 13 19:29:08.053: INFO: Got endpoints: latency-svc-m8nrf [1.163471017s]
Aug 13 19:29:08.072: INFO: Created: latency-svc-mr22h
Aug 13 19:29:08.101: INFO: Got endpoints: latency-svc-mr22h [1.15215629s]
Aug 13 19:29:08.125: INFO: Created: latency-svc-zvlb4
Aug 13 19:29:08.132: INFO: Got endpoints: latency-svc-zvlb4 [1.110403332s]
Aug 13 19:29:08.196: INFO: Created: latency-svc-cns59
Aug 13 19:29:08.252: INFO: Got endpoints: latency-svc-cns59 [1.120026479s]
Aug 13 19:29:08.294: INFO: Created: latency-svc-s9w2m
Aug 13 19:29:08.376: INFO: Got endpoints: latency-svc-s9w2m [1.101453875s]
Aug 13 19:29:08.378: INFO: Created: latency-svc-8r6db
Aug 13 19:29:08.385: INFO: Got endpoints: latency-svc-8r6db [1.04989378s]
Aug 13 19:29:08.423: INFO: Created: latency-svc-wh2tb
Aug 13 19:29:08.434: INFO: Got endpoints: latency-svc-wh2tb [1.003647111s]
Aug 13 19:29:08.459: INFO: Created: latency-svc-wxd9c
Aug 13 19:29:08.470: INFO: Got endpoints: latency-svc-wxd9c [984.110952ms]
Aug 13 19:29:08.520: INFO: Created: latency-svc-s6n7s
Aug 13 19:29:08.524: INFO: Got endpoints: latency-svc-s6n7s [968.173781ms]
Aug 13 19:29:08.578: INFO: Created: latency-svc-cn98r
Aug 13 19:29:08.591: INFO: Got endpoints: latency-svc-cn98r [996.131588ms]
Aug 13 19:29:08.614: INFO: Created: latency-svc-784kf
Aug 13 19:29:08.681: INFO: Got endpoints: latency-svc-784kf [958.021228ms]
Aug 13 19:29:08.708: INFO: Created: latency-svc-h6hwb
Aug 13 19:29:08.724: INFO: Got endpoints: latency-svc-h6hwb [972.688116ms]
Aug 13 19:29:08.750: INFO: Created: latency-svc-b7xtm
Aug 13 19:29:08.766: INFO: Got endpoints: latency-svc-b7xtm [957.594652ms]
Aug 13 19:29:08.867: INFO: Created: latency-svc-vtdsl
Aug 13 19:29:08.914: INFO: Got endpoints: latency-svc-vtdsl [970.860229ms]
Aug 13 19:29:08.915: INFO: Created: latency-svc-7rjwd
Aug 13 19:29:08.948: INFO: Got endpoints: latency-svc-7rjwd [991.524455ms]
Aug 13 19:29:09.034: INFO: Created: latency-svc-fzx86
Aug 13 19:29:09.038: INFO: Got endpoints: latency-svc-fzx86 [985.931798ms]
Aug 13 19:29:09.084: INFO: Created: latency-svc-k28q4
Aug 13 19:29:09.098: INFO: Got endpoints: latency-svc-k28q4 [997.047205ms]
Aug 13 19:29:09.131: INFO: Created: latency-svc-8jlgz
Aug 13 19:29:09.202: INFO: Got endpoints: latency-svc-8jlgz [1.070738039s]
Aug 13 19:29:09.224: INFO: Created: latency-svc-g742j
Aug 13 19:29:09.259: INFO: Got endpoints: latency-svc-g742j [1.007563327s]
Aug 13 19:29:09.304: INFO: Created: latency-svc-hw2ts
Aug 13 19:29:09.348: INFO: Got endpoints: latency-svc-hw2ts [971.782605ms]
Aug 13 19:29:09.364: INFO: Created: latency-svc-tpk6x
Aug 13 19:29:09.388: INFO: Got endpoints: latency-svc-tpk6x [1.002755512s]
Aug 13 19:29:09.413: INFO: Created: latency-svc-t46nm
Aug 13 19:29:09.431: INFO: Got endpoints: latency-svc-t46nm [996.906309ms]
Aug 13 19:29:09.948: INFO: Created: latency-svc-s7vlk
Aug 13 19:29:10.034: INFO: Got endpoints: latency-svc-s7vlk [1.564264444s]
Aug 13 19:29:10.085: INFO: Created: latency-svc-wbg7b
Aug 13 19:29:10.091: INFO: Got endpoints: latency-svc-wbg7b [1.567123612s]
Aug 13 19:29:10.124: INFO: Created: latency-svc-v5s8j
Aug 13 19:29:10.133: INFO: Got endpoints: latency-svc-v5s8j [1.54202716s]
Aug 13 19:29:10.258: INFO: Created: latency-svc-jp4sn
Aug 13 19:29:10.437: INFO: Got endpoints: latency-svc-jp4sn [1.755367122s]
Aug 13 19:29:10.700: INFO: Created: latency-svc-ssd4d
Aug 13 19:29:10.751: INFO: Got endpoints: latency-svc-ssd4d [2.02687795s]
Aug 13 19:29:10.849: INFO: Created: latency-svc-d2rpv
Aug 13 19:29:10.901: INFO: Created: latency-svc-qjxqg
Aug 13 19:29:10.901: INFO: Got endpoints: latency-svc-d2rpv [2.134650483s]
Aug 13 19:29:10.919: INFO: Got endpoints: latency-svc-qjxqg [2.004686586s]
Aug 13 19:29:10.943: INFO: Created: latency-svc-z5v58
Aug 13 19:29:10.993: INFO: Got endpoints: latency-svc-z5v58 [2.045021581s]
Aug 13 19:29:11.006: INFO: Created: latency-svc-zml5f
Aug 13 19:29:11.021: INFO: Got endpoints: latency-svc-zml5f [1.982020966s]
Aug 13 19:29:11.078: INFO: Created: latency-svc-mxfs9
Aug 13 19:29:11.086: INFO: Got endpoints: latency-svc-mxfs9 [1.987976232s]
Aug 13 19:29:11.154: INFO: Created: latency-svc-q65vh
Aug 13 19:29:11.170: INFO: Got endpoints: latency-svc-q65vh [1.96793697s]
Aug 13 19:29:11.215: INFO: Created: latency-svc-vznkl
Aug 13 19:29:11.298: INFO: Got endpoints: latency-svc-vznkl [2.038826452s]
Aug 13 19:29:11.324: INFO: Created: latency-svc-n4n7r
Aug 13 19:29:11.333: INFO: Got endpoints: latency-svc-n4n7r [1.985226058s]
Aug 13 19:29:11.356: INFO: Created: latency-svc-4rk4n
Aug 13 19:29:11.382: INFO: Got endpoints: latency-svc-4rk4n [1.994692212s]
Aug 13 19:29:11.486: INFO: Created: latency-svc-ms5m5
Aug 13 19:29:11.518: INFO: Got endpoints: latency-svc-ms5m5 [2.087582835s]
Aug 13 19:29:11.561: INFO: Created: latency-svc-vmb4z
Aug 13 19:29:11.604: INFO: Got endpoints: latency-svc-vmb4z [1.569818136s]
Aug 13 19:29:11.623: INFO: Created: latency-svc-mrhsp
Aug 13 19:29:11.648: INFO: Got endpoints: latency-svc-mrhsp [1.5566627s]
Aug 13 19:29:11.687: INFO: Created: latency-svc-qbk8f
Aug 13 19:29:11.696: INFO: Got endpoints: latency-svc-qbk8f [1.562691331s]
Aug 13 19:29:11.753: INFO: Created: latency-svc-7kfts
Aug 13 19:29:11.776: INFO: Got endpoints: latency-svc-7kfts [1.339162634s]
Aug 13 19:29:11.813: INFO: Created: latency-svc-88c5j
Aug 13 19:29:11.832: INFO: Got endpoints: latency-svc-88c5j [135.892046ms]
Aug 13 19:29:11.848: INFO: Created: latency-svc-jrgqr
Aug 13 19:29:11.891: INFO: Got endpoints: latency-svc-jrgqr [1.139906728s]
Aug 13 19:29:11.917: INFO: Created: latency-svc-t2bzt
Aug 13 19:29:11.938: INFO: Got endpoints: latency-svc-t2bzt [1.036950671s]
Aug 13 19:29:11.960: INFO: Created: latency-svc-zjr65
Aug 13 19:29:11.978: INFO: Got endpoints: latency-svc-zjr65 [1.058803331s]
Aug 13 19:29:12.041: INFO: Created: latency-svc-tm95h
Aug 13 19:29:12.046: INFO: Got endpoints: latency-svc-tm95h [1.05324272s]
Aug 13 19:29:12.070: INFO: Created: latency-svc-kdqwb
Aug 13 19:29:12.090: INFO: Got endpoints: latency-svc-kdqwb [1.069114085s]
Aug 13 19:29:12.191: INFO: Created: latency-svc-7cpkm
Aug 13 19:29:12.194: INFO: Got endpoints: latency-svc-7cpkm [1.107527374s]
Aug 13 19:29:12.253: INFO: Created: latency-svc-bq6bq
Aug 13 19:29:12.271: INFO: Got endpoints: latency-svc-bq6bq [1.100730847s]
Aug 13 19:29:12.330: INFO: Created: latency-svc-pf2jj
Aug 13 19:29:12.343: INFO: Got endpoints: latency-svc-pf2jj [1.044977234s]
Aug 13 19:29:12.370: INFO: Created: latency-svc-fpcgt
Aug 13 19:29:12.385: INFO: Got endpoints: latency-svc-fpcgt [1.051317876s]
Aug 13 19:29:12.520: INFO: Created: latency-svc-hjkgr
Aug 13 19:29:12.524: INFO: Got endpoints: latency-svc-hjkgr [1.141942756s]
Aug 13 19:29:12.553: INFO: Created: latency-svc-rjp9p
Aug 13 19:29:12.580: INFO: Got endpoints: latency-svc-rjp9p [1.061810737s]
Aug 13 19:29:12.616: INFO: Created: latency-svc-79rcv
Aug 13 19:29:12.658: INFO: Got endpoints: latency-svc-79rcv [1.053945227s]
Aug 13 19:29:12.676: INFO: Created: latency-svc-sj4ml
Aug 13 19:29:12.711: INFO: Got endpoints: latency-svc-sj4ml [1.062910957s]
Aug 13 19:29:12.734: INFO: Created: latency-svc-ngdpl
Aug 13 19:29:12.753: INFO: Got endpoints: latency-svc-ngdpl [976.995597ms]
Aug 13 19:29:12.807: INFO: Created: latency-svc-4rfps
Aug 13 19:29:12.813: INFO: Got endpoints: latency-svc-4rfps [981.310695ms]
Aug 13 19:29:12.856: INFO: Created: latency-svc-m9g72
Aug 13 19:29:12.874: INFO: Got endpoints: latency-svc-m9g72 [983.259896ms]
Aug 13 19:29:12.981: INFO: Created: latency-svc-sjfcp
Aug 13 19:29:12.988: INFO: Got endpoints: latency-svc-sjfcp [1.050179975s]
Aug 13 19:29:13.027: INFO: Created: latency-svc-rpqgn
Aug 13 19:29:13.065: INFO: Got endpoints: latency-svc-rpqgn [1.086901782s]
Aug 13 19:29:13.130: INFO: Created: latency-svc-qw9kc
Aug 13 19:29:13.151: INFO: Got endpoints: latency-svc-qw9kc [1.105165615s]
Aug 13 19:29:13.181: INFO: Created: latency-svc-7vf6t
Aug 13 19:29:13.194: INFO: Got endpoints: latency-svc-7vf6t [1.104096447s]
Aug 13 19:29:13.298: INFO: Created: latency-svc-fzhjl
Aug 13 19:29:13.327: INFO: Got endpoints: latency-svc-fzhjl [1.133047152s]
Aug 13 19:29:13.328: INFO: Created: latency-svc-22tc7
Aug 13 19:29:13.351: INFO: Got endpoints: latency-svc-22tc7 [1.08013086s]
Aug 13 19:29:13.385: INFO: Created: latency-svc-6nz4m
Aug 13 19:29:13.454: INFO: Got endpoints: latency-svc-6nz4m [1.111009932s]
Aug 13 19:29:13.456: INFO: Created: latency-svc-724hw
Aug 13 19:29:13.501: INFO: Got endpoints: latency-svc-724hw [1.116382579s]
Aug 13 19:29:13.538: INFO: Created: latency-svc-8hvl5
Aug 13 19:29:13.550: INFO: Got endpoints: latency-svc-8hvl5 [1.025211635s]
Aug 13 19:29:13.605: INFO: Created: latency-svc-tz8fq
Aug 13 19:29:13.642: INFO: Got endpoints: latency-svc-tz8fq [1.062005487s]
Aug 13 19:29:13.675: INFO: Created: latency-svc-svptv
Aug 13 19:29:13.689: INFO: Got endpoints: latency-svc-svptv [1.030713806s]
Aug 13 19:29:13.747: INFO: Created: latency-svc-q26t5
Aug 13 19:29:13.755: INFO: Got endpoints: latency-svc-q26t5 [1.043971775s]
Aug 13 19:29:13.778: INFO: Created: latency-svc-2gxhh
Aug 13 19:29:13.791: INFO: Got endpoints: latency-svc-2gxhh [1.037868928s]
Aug 13 19:29:13.815: INFO: Created: latency-svc-rsp8l
Aug 13 19:29:13.897: INFO: Got endpoints: latency-svc-rsp8l [1.083889151s]
Aug 13 19:29:13.911: INFO: Created: latency-svc-zplbx
Aug 13 19:29:13.930: INFO: Got endpoints: latency-svc-zplbx [1.056215187s]
Aug 13 19:29:13.953: INFO: Created: latency-svc-vz9f6
Aug 13 19:29:13.973: INFO: Got endpoints: latency-svc-vz9f6 [984.733669ms]
Aug 13 19:29:13.993: INFO: Created: latency-svc-rdmdt
Aug 13 19:29:14.040: INFO: Got endpoints: latency-svc-rdmdt [975.630191ms]
Aug 13 19:29:14.040: INFO: Latencies: [65.341267ms 123.949837ms 135.892046ms 280.31193ms 355.064756ms 427.214752ms 536.697953ms 611.28494ms 707.881553ms 878.800955ms 950.234074ms 955.104371ms 957.594652ms 958.021228ms 965.904456ms 968.173781ms 968.41427ms 970.860229ms 971.782605ms 971.845102ms 972.688116ms 975.630191ms 976.995597ms 981.310695ms 983.259896ms 984.110952ms 984.733669ms 985.931798ms 989.207044ms 989.898671ms 990.128271ms 991.524455ms 996.131588ms 996.906309ms 997.047205ms 997.970743ms 1.001678632s 1.002740795s 1.002755512s 1.003647111s 1.005967742s 1.007563327s 1.009703189s 1.010862946s 1.01426203s 1.01692108s 1.018692699s 1.019997673s 1.020706605s 1.024462009s 1.025211635s 1.028239344s 1.030713806s 1.030856846s 1.032881743s 1.03361809s 1.036950671s 1.037868928s 1.040892245s 1.042606866s 1.043971775s 1.044027192s 1.044977234s 1.047160009s 1.04989378s 1.050179975s 1.051317876s 1.052685226s 1.05324272s 1.053945227s 1.056215187s 1.058803331s 1.061810737s 1.062005487s 1.062910957s 1.066946017s 1.068112146s 1.069114085s 1.070738039s 1.072392753s 1.0771606s 1.077944773s 1.08013086s 1.080908197s 1.082133782s 1.083889151s 1.086614794s 1.086901782s 1.088051286s 1.090135379s 1.091881647s 1.095703478s 1.098547666s 1.099093304s 1.100730847s 1.101453875s 1.10365766s 1.104096447s 1.105165615s 1.105933245s 1.10662477s 1.107527374s 1.109761865s 1.110403332s 1.111009932s 1.111484069s 1.11190906s 1.116382579s 1.119499803s 1.120026479s 1.123138669s 1.129606811s 1.131345819s 1.132312834s 1.133047152s 1.134347574s 1.137565356s 1.138655514s 1.139906728s 1.141842408s 1.141942756s 1.142344323s 1.146645083s 1.148354762s 1.148682506s 1.15215629s 1.163090758s 1.163471017s 1.164835807s 1.16621843s 1.169734181s 1.172137198s 1.172600375s 1.172966214s 1.177515264s 1.181846543s 1.186646172s 1.188851795s 1.18962392s 1.190792388s 1.194544674s 1.200709925s 1.204413782s 1.218648304s 1.22437027s 1.227005055s 1.229262056s 1.234576609s 1.236953957s 1.242995603s 1.260186662s 1.266109693s 1.267240397s 1.269556408s 1.270007771s 1.274177821s 1.274566364s 1.279141417s 1.294788203s 1.317400736s 1.339162634s 1.378643977s 1.406419027s 1.4188743s 1.459973975s 1.54202716s 1.5566627s 1.562691331s 1.564264444s 1.567123612s 1.569818136s 1.755367122s 1.766574095s 1.96793697s 1.982020966s 1.985226058s 1.987976232s 1.994692212s 2.004686586s 2.021357508s 2.02687795s 2.038826452s 2.045021581s 2.087582835s 2.134650483s 2.17700011s 2.448644481s 2.784331354s 2.805696765s 2.841597546s 2.842693685s 2.930609132s 2.936848492s 2.990748167s 3.00790658s 3.052476847s 3.094180889s 3.095717979s 3.099798958s 3.136738453s]
Aug 13 19:29:14.041: INFO: 50 %ile: 1.10662477s
Aug 13 19:29:14.041: INFO: 90 %ile: 2.02687795s
Aug 13 19:29:14.041: INFO: 99 %ile: 3.099798958s
Aug 13 19:29:14.041: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:29:14.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1647" for this suite.

• [SLOW TEST:21.806 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":275,"completed":235,"skipped":4002,"failed":0}
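The 50/90/99 %ile figures reported above are rank statistics over the 200 sorted latency samples. A minimal sketch of that computation (nearest-rank indexing is an assumption here; the e2e framework's exact rounding may differ):

```python
def percentile(samples, p):
    """Nearest-rank percentile over a list of latency samples (seconds)."""
    s = sorted(samples)
    # index p% of the way through the sorted list, clamped to the last element
    idx = min(int(len(s) * p / 100), len(s) - 1)
    return s[idx]

# 200 synthetic samples from 0.01s to 2.00s (illustrative, not the logged data)
samples = [i / 100 for i in range(1, 201)]
print(percentile(samples, 50))  # 1.01
print(percentile(samples, 90))  # 1.81
print(percentile(samples, 99))  # 1.99
```

With 200 samples the 50th percentile falls on sorted index 100, which is why the reported 50 %ile (1.10662477s) is itself one of the raw samples in the list above.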
SSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:29:14.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:91
Aug 13 19:29:14.132: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 13 19:29:14.172: INFO: Waiting for terminating namespaces to be deleted...
Aug 13 19:29:14.175: INFO: 
Logging pods the kubelet thinks is on node kali-worker before test
Aug 13 19:29:14.181: INFO: rally-19e4df10-30wkw9yu-glqpf from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.181: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 13 19:29:14.181: INFO: rally-466602a1-db17uwyh-z26cp from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:59:13 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.181: INFO: 	Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 13 19:29:14.181: INFO: rally-466602a1-db17uwyh-6xgdb from c-rally-466602a1-5ui3rnqd started at 2020-08-11 18:51:36 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.181: INFO: 	Container rally-466602a1-db17uwyh ready: false, restart count 0
Aug 13 19:29:14.181: INFO: kube-proxy-qwsfx from kube-system started at 2020-07-10 10:28:31 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.181: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 13 19:29:14.181: INFO: rally-824618b1-6cukkjuh-lb7rq from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:26 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.181: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 13 19:29:14.181: INFO: svc-latency-rc-vztzm from svc-latency-1647 started at 2020-08-13 19:28:52 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.181: INFO: 	Container svc-latency-rc ready: true, restart count 0
Aug 13 19:29:14.181: INFO: kindnet-njbgt from kube-system started at 2020-07-10 10:28:30 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.181: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 13 19:29:14.181: INFO: 
Logging pods the kubelet thinks is on node kali-worker2 before test
Aug 13 19:29:14.195: INFO: rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh from c-rally-6c5ea4be-pyo3sp3v started at 2020-08-11 18:16:03 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.195: INFO: 	Container rally-6c5ea4be-96nyoha6 ready: true, restart count 52
Aug 13 19:29:14.195: INFO: kube-proxy-vk6jr from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.195: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 13 19:29:14.195: INFO: kindnet-pk4xb from kube-system started at 2020-07-10 10:28:28 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.195: INFO: 	Container kindnet-cni ready: true, restart count 1
Aug 13 19:29:14.195: INFO: rally-19e4df10-30wkw9yu-qbmr7 from c-rally-19e4df10-7fs771wk started at 2020-08-01 11:12:55 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.195: INFO: 	Container rally-19e4df10-30wkw9yu ready: true, restart count 0
Aug 13 19:29:14.195: INFO: rally-824618b1-6cukkjuh-m84l4 from c-rally-824618b1-4lzsfcdd started at 2020-08-01 10:57:24 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.195: INFO: 	Container rally-824618b1-6cukkjuh ready: true, restart count 3
Aug 13 19:29:14.195: INFO: rally-7104017d-j5l4uv4e-0 from c-rally-7104017d-2oejvhl7 started at 2020-08-11 18:51:39 +0000 UTC (1 container statuses recorded)
Aug 13 19:29:14.195: INFO: 	Container rally-7104017d-j5l4uv4e ready: true, restart count 1
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: verifying the node has the label node kali-worker
STEP: verifying the node has the label node kali-worker2
Aug 13 19:29:14.304: INFO: Pod rally-19e4df10-30wkw9yu-glqpf requesting resource cpu=0m on Node kali-worker
Aug 13 19:29:14.304: INFO: Pod rally-19e4df10-30wkw9yu-qbmr7 requesting resource cpu=0m on Node kali-worker2
Aug 13 19:29:14.304: INFO: Pod rally-6c5ea4be-96nyoha6-75976ff4d6-kqnxh requesting resource cpu=0m on Node kali-worker2
Aug 13 19:29:14.304: INFO: Pod rally-7104017d-j5l4uv4e-0 requesting resource cpu=0m on Node kali-worker2
Aug 13 19:29:14.304: INFO: Pod rally-824618b1-6cukkjuh-lb7rq requesting resource cpu=0m on Node kali-worker
Aug 13 19:29:14.304: INFO: Pod rally-824618b1-6cukkjuh-m84l4 requesting resource cpu=0m on Node kali-worker2
Aug 13 19:29:14.304: INFO: Pod kindnet-njbgt requesting resource cpu=100m on Node kali-worker
Aug 13 19:29:14.304: INFO: Pod kindnet-pk4xb requesting resource cpu=100m on Node kali-worker2
Aug 13 19:29:14.304: INFO: Pod kube-proxy-qwsfx requesting resource cpu=0m on Node kali-worker
Aug 13 19:29:14.304: INFO: Pod kube-proxy-vk6jr requesting resource cpu=0m on Node kali-worker2
Aug 13 19:29:14.304: INFO: Pod svc-latency-rc-vztzm requesting resource cpu=0m on Node kali-worker
STEP: Starting Pods to consume most of the cluster CPU.
Aug 13 19:29:14.304: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker2
Aug 13 19:29:14.309: INFO: Creating a pod which consumes cpu=11130m on Node kali-worker
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-168b8358-fdc9-4951-b591-2cdb837c0d5e.162aea97853e3e9e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6034/filler-pod-168b8358-fdc9-4951-b591-2cdb837c0d5e to kali-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-168b8358-fdc9-4951-b591-2cdb837c0d5e.162aea97e33c2fc4], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-168b8358-fdc9-4951-b591-2cdb837c0d5e.162aea987b13c58a], Reason = [Created], Message = [Created container filler-pod-168b8358-fdc9-4951-b591-2cdb837c0d5e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-168b8358-fdc9-4951-b591-2cdb837c0d5e.162aea988cd28fdb], Reason = [Started], Message = [Started container filler-pod-168b8358-fdc9-4951-b591-2cdb837c0d5e]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb360223-e378-4c93-8d45-bfb8a96090de.162aea978373f428], Reason = [Scheduled], Message = [Successfully assigned sched-pred-6034/filler-pod-bb360223-e378-4c93-8d45-bfb8a96090de to kali-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb360223-e378-4c93-8d45-bfb8a96090de.162aea97d09e0f29], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.2" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb360223-e378-4c93-8d45-bfb8a96090de.162aea9850baf353], Reason = [Created], Message = [Created container filler-pod-bb360223-e378-4c93-8d45-bfb8a96090de]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bb360223-e378-4c93-8d45-bfb8a96090de.162aea987832fad4], Reason = [Started], Message = [Started container filler-pod-bb360223-e378-4c93-8d45-bfb8a96090de]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162aea98efbfeb68], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162aea98f51b1519], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node kali-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node kali-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:29:21.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6034" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:82

• [SLOW TEST:7.575 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":275,"completed":236,"skipped":4016,"failed":0}
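The filler-pod request size logged above (cpu=11130m) is derived from each node's allocatable CPU minus what the already-running pods request, so that one additional pod cannot be scheduled. A hypothetical sketch of that arithmetic (the node's allocatable value is an assumption; only the 100m kindnet request appears in the log):

```python
def filler_cpu_millis(allocatable_m, requested_m):
    """CPU (in millicores) a filler pod must request to leave no headroom
    on a node. Inputs are assumptions for illustration."""
    return max(allocatable_m - requested_m, 0)

# hypothetical node: if allocatable were 11230m and kindnet requests 100m,
# the filler pod would need 11130m, matching the figure in the log
print(filler_cpu_millis(11230, 100))  # 11130
```

Any subsequent pod requesting nonzero CPU then fails with the `Insufficient cpu` FailedScheduling event seen above.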
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:29:21.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:29:27.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5752" for this suite.

• [SLOW TEST:6.411 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:188
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":237,"skipped":4021,"failed":0}
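The read-only busybox test above relies on the pod-level `securityContext.readOnlyRootFilesystem` field. A minimal manifest sketch, built as a Python dict (the image, command, and names are assumptions based on the test's description, not taken from the log):

```python
# Minimal pod spec with a read-only root filesystem; any write to "/"
# inside the container is expected to fail.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "busybox-readonly-fs"},  # hypothetical name
    "spec": {
        "containers": [{
            "name": "busybox",
            "image": "busybox",
            "command": ["/bin/sh", "-c", "echo test > /file && sleep 60"],
            "securityContext": {"readOnlyRootFilesystem": True},
        }],
        "restartPolicy": "Never",
    },
}
print(pod["spec"]["containers"][0]["securityContext"]["readOnlyRootFilesystem"])  # True
```

With `readOnlyRootFilesystem: true`, the shell redirect fails and the container exits nonzero, which is the behavior this conformance test asserts.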
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:29:28.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:29:31.024: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Aug 13 19:29:33.147: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943771, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943771, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943771, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943770, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:29:35.150: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943771, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943771, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943771, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943770, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-65c6cd5fdf\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:29:38.286: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:29:38.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:29:39.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2478" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:13.404 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":275,"completed":238,"skipped":4036,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:29:41.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0813 19:30:23.138392       7 metrics_grabber.go:84] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 13 19:30:23.138: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:30:23.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7779" for this suite.

• [SLOW TEST:41.701 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":275,"completed":239,"skipped":4081,"failed":0}
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:30:23.146: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 13 19:30:27.613: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:30:27.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9628" for this suite.
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":275,"completed":240,"skipped":4083,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:30:27.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 13 19:30:42.554: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 13 19:30:42.623: INFO: Pod pod-with-poststart-http-hook still exists
Aug 13 19:30:44.624: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 13 19:30:44.628: INFO: Pod pod-with-poststart-http-hook still exists
Aug 13 19:30:46.623: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 13 19:30:46.671: INFO: Pod pod-with-poststart-http-hook still exists
Aug 13 19:30:48.623: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 13 19:30:48.635: INFO: Pod pod-with-poststart-http-hook still exists
Aug 13 19:30:50.624: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 13 19:30:50.640: INFO: Pod pod-with-poststart-http-hook still exists
Aug 13 19:30:52.624: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 13 19:30:52.628: INFO: Pod pod-with-poststart-http-hook still exists
Aug 13 19:30:54.624: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 13 19:30:54.628: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:30:54.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7289" for this suite.

• [SLOW TEST:26.842 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":275,"completed":241,"skipped":4122,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:30:54.640: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:30:54.709: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 13 19:30:54.747: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 13 19:30:59.750: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 13 19:30:59.750: INFO: Creating deployment "test-rolling-update-deployment"
Aug 13 19:30:59.754: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 13 19:30:59.789: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 13 19:31:01.860: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Aug 13 19:31:01.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943859, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943859, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943859, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943859, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:31:03.905: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943859, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943859, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943859, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943859, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-59d5cb45c7\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:31:05.927: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Aug 13 19:31:06.071: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-1399 /apis/apps/v1/namespaces/deployment-1399/deployments/test-rolling-update-deployment 7724a693-53ad-45a3-8a9f-791409672608 9297272 1 2020-08-13 19:30:59 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  [{e2e.test Update apps/v1 2020-08-13 19:30:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 112 114 111 103 114 101 115 115 68 101 97 100 108 105 110 101 83 101 99 111 110 100 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 118 105 115 105 111 110 72 105 115 116 111 114 121 76 105 109 105 116 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 116 114 97 116 101 103 121 34 58 123 34 102 58 114 111 108 108 105 110 103 85 112 100 97 116 101 34 58 123 34 46 34 58 123 125 44 34 102 58 109 97 120 83 117 114 103 101 34 58 123 125 44 34 102 58 109 97 120 85 110 97 118 97 105 108 97 98 108 101 34 58 123 125 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 
58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-13 19:31:04 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 65 118 97 105 108 97 98 108 101 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 80 114 111 
103 114 101 115 115 105 110 103 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 85 112 100 97 116 101 84 105 109 101 34 58 123 125 44 34 102 58 109 101 115 115 97 103 101 34 58 123 125 44 34 102 58 114 101 97 115 111 110 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 117 112 100 97 116 101 100 82 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00578c8e8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-08-13 19:30:59 +0000 UTC,LastTransitionTime:2020-08-13 19:30:59 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-59d5cb45c7" has successfully progressed.,LastUpdateTime:2020-08-13 19:31:04 +0000 UTC,LastTransitionTime:2020-08-13 19:30:59 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Aug 13 19:31:06.138: INFO: New ReplicaSet "test-rolling-update-deployment-59d5cb45c7" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7  deployment-1399 /apis/apps/v1/namespaces/deployment-1399/replicasets/test-rolling-update-deployment-59d5cb45c7 cfb6f99c-b1d6-41c5-9910-85dbee23ce3f 9297259 1 2020-08-13 19:30:59 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 7724a693-53ad-45a3-8a9f-791409672608 0xc00578ce37 0xc00578ce38}] []  [{kube-controller-manager Update apps/v1 2020-08-13 19:31:03 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 55 50 52 97 54 57 51 45 53 51 97 100 45 52 53 97 51 45 56 97 57 102 45 55 57 49 52 48 57 54 55 50 54 48 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 
116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 
105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 97 118 97 105 108 97 98 108 101 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 102 117 108 108 121 76 97 98 101 108 101 100 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 97 100 121 82 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 59d5cb45c7,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [] []  []} {[] [] [{agnhost us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00578cec8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Aug 13 19:31:06.139: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 13 19:31:06.139: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-1399 /apis/apps/v1/namespaces/deployment-1399/replicasets/test-rolling-update-controller ad1b2a24-d64b-4184-9aec-ebc9eafbafdd 9297270 2 2020-08-13 19:30:54 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 7724a693-53ad-45a3-8a9f-791409672608 0xc00578cd1f 0xc00578cd30}] []  [{e2e.test Update apps/v1 2020-08-13 19:30:54 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 46 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 114 101 118 105 115 105 111 110 34 58 123 125 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 115 101 108 101 99 116 111 114 34 58 123 34 102 58 109 97 116 99 104 76 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 116 101 109 112 108 97 116 101 34 58 123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 34 58 123 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 104 116 116 112 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 
111 117 114 99 101 115 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125 125 125],}} {kube-controller-manager Update apps/v1 2020-08-13 19:31:04 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 97 110 110 111 116 97 116 105 111 110 115 34 58 123 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 100 101 115 105 114 101 100 45 114 101 112 108 105 99 97 115 34 58 123 125 44 34 102 58 100 101 112 108 111 121 109 101 110 116 46 107 117 98 101 114 110 101 116 101 115 46 105 111 47 109 97 120 45 114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 55 55 50 52 97 54 57 51 45 53 51 97 100 45 52 53 97 51 45 56 97 57 102 45 55 57 49 52 48 57 54 55 50 54 48 56 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 
114 101 112 108 105 99 97 115 34 58 123 125 125 44 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 111 98 115 101 114 118 101 100 71 101 110 101 114 97 116 105 111 110 34 58 123 125 44 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125 125],}}]},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00578cdc8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Aug 13 19:31:06.142: INFO: Pod "test-rolling-update-deployment-59d5cb45c7-hwh4p" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-59d5cb45c7-hwh4p test-rolling-update-deployment-59d5cb45c7- deployment-1399 /api/v1/namespaces/deployment-1399/pods/test-rolling-update-deployment-59d5cb45c7-hwh4p 8704a5b9-c3c8-4fc7-bbeb-bc047633757a 9297258 0 2020-08-13 19:30:59 +0000 UTC   map[name:sample-pod pod-template-hash:59d5cb45c7] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-59d5cb45c7 cfb6f99c-b1d6-41c5-9910-85dbee23ce3f 0xc003228497 0xc003228498}] []  [{kube-controller-manager Update v1 2020-08-13 19:30:59 +0000 UTC FieldsV1 FieldsV1{Raw:*[123 34 102 58 109 101 116 97 100 97 116 97 34 58 123 34 102 58 103 101 110 101 114 97 116 101 78 97 109 101 34 58 123 125 44 34 102 58 108 97 98 101 108 115 34 58 123 34 46 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 112 111 100 45 116 101 109 112 108 97 116 101 45 104 97 115 104 34 58 123 125 125 44 34 102 58 111 119 110 101 114 82 101 102 101 114 101 110 99 101 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 117 105 100 92 34 58 92 34 99 102 98 54 102 57 57 99 45 98 49 100 54 45 52 49 99 53 45 57 57 49 48 45 56 53 100 98 101 101 50 51 99 101 51 102 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 97 112 105 86 101 114 115 105 111 110 34 58 123 125 44 34 102 58 98 108 111 99 107 79 119 110 101 114 68 101 108 101 116 105 111 110 34 58 123 125 44 34 102 58 99 111 110 116 114 111 108 108 101 114 34 58 123 125 44 34 102 58 107 105 110 100 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 117 105 100 34 58 123 125 125 125 125 44 34 102 58 115 112 101 99 34 58 123 34 102 58 99 111 110 116 97 105 110 101 114 115 34 58 123 34 107 58 123 92 34 110 97 109 101 92 34 58 92 34 97 103 110 104 111 115 116 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 109 97 103 101 34 58 123 125 44 34 102 58 105 109 97 103 101 80 117 108 108 80 111 108 105 99 121 34 58 123 125 44 34 102 58 110 97 109 101 34 58 123 125 44 34 102 58 114 101 115 111 117 114 99 101 115 34 58 
123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 97 116 104 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 77 101 115 115 97 103 101 80 111 108 105 99 121 34 58 123 125 125 125 44 34 102 58 100 110 115 80 111 108 105 99 121 34 58 123 125 44 34 102 58 101 110 97 98 108 101 83 101 114 118 105 99 101 76 105 110 107 115 34 58 123 125 44 34 102 58 114 101 115 116 97 114 116 80 111 108 105 99 121 34 58 123 125 44 34 102 58 115 99 104 101 100 117 108 101 114 78 97 109 101 34 58 123 125 44 34 102 58 115 101 99 117 114 105 116 121 67 111 110 116 101 120 116 34 58 123 125 44 34 102 58 116 101 114 109 105 110 97 116 105 111 110 71 114 97 99 101 80 101 114 105 111 100 83 101 99 111 110 100 115 34 58 123 125 125 125],}} {kubelet Update v1 2020-08-13 19:31:03 +0000 UTC FieldsV1 &FieldsV1{Raw:*[123 34 102 58 115 116 97 116 117 115 34 58 123 34 102 58 99 111 110 100 105 116 105 111 110 115 34 58 123 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 67 111 110 116 97 105 110 101 114 115 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 73 110 105 116 105 97 108 105 122 101 100 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 44 34 107 58 123 92 34 116 121 112 101 92 34 58 92 34 82 101 97 100 121 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 108 97 
115 116 80 114 111 98 101 84 105 109 101 34 58 123 125 44 34 102 58 108 97 115 116 84 114 97 110 115 105 116 105 111 110 84 105 109 101 34 58 123 125 44 34 102 58 115 116 97 116 117 115 34 58 123 125 44 34 102 58 116 121 112 101 34 58 123 125 125 125 44 34 102 58 99 111 110 116 97 105 110 101 114 83 116 97 116 117 115 101 115 34 58 123 125 44 34 102 58 104 111 115 116 73 80 34 58 123 125 44 34 102 58 112 104 97 115 101 34 58 123 125 44 34 102 58 112 111 100 73 80 34 58 123 125 44 34 102 58 112 111 100 73 80 115 34 58 123 34 46 34 58 123 125 44 34 107 58 123 92 34 105 112 92 34 58 92 34 49 48 46 50 52 52 46 50 46 50 54 92 34 125 34 58 123 34 46 34 58 123 125 44 34 102 58 105 112 34 58 123 125 125 125 44 34 102 58 115 116 97 114 116 84 105 109 101 34 58 123 125 125 125],}}]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8nkxq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8nkxq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8nkxq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:
nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:kali-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,FSGroupChangePolicy:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:30:59 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:31:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:31:03 
+0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-08-13 19:30:59 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.18.0.13,PodIP:10.244.2.26,StartTime:2020-08-13 19:30:59 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-08-13 19:31:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost:2.12,ImageID:us.gcr.io/k8s-artifacts-prod/e2e-test-images/agnhost@sha256:1d7f0d77a6f07fd507f147a38d06a7c8269ebabd4f923bfe46d4fb8b396a520c,ContainerID:containerd://231e1190f0a39bd9644503197f71081096aaf30fd1a26385641fc57e57c24fbd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.26,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
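The long `FieldsV1{Raw:*[...]}` blocks in the object dumps above are managedFields JSON printed as space-separated decimal byte values. A small sketch (assuming the dump is UTF-8 JSON, as these are) turns them back into readable form:

```python
import json

def decode_fieldsv1(raw: str) -> dict:
    """Decode a space-separated decimal byte dump, as printed for
    FieldsV1 Raw:*[...] in these logs, into a JSON object."""
    text = bytes(int(b) for b in raw.split()).decode("utf-8")
    return json.loads(text)

# A short fragment in the same encoding as the dumps above.
sample = "123 34 102 58 114 101 112 108 105 99 97 115 34 58 123 125 125"
print(decode_fieldsv1(sample))  # -> {'f:replicas': {}}
```

Running this over a full `Raw:*[...]` payload recovers the server-side-apply field ownership map (`f:metadata`, `f:spec`, `f:status`, ...) that each manager (kube-controller-manager, kubelet) claims.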
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:31:06.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1399" for this suite.

• [SLOW TEST:11.509 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":275,"completed":242,"skipped":4150,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:31:06.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:31:06.332: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 13 19:31:08.449: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:31:10.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6772" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":275,"completed":243,"skipped":4169,"failed":0}
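The quota test above checks for a `ReplicaFailure` condition on the ReplicationController status: it appears while the quota blocks pod creation and is cleared after the scale-down. A hedged sketch of that condition check (sample condition data is illustrative, not from the log):

```python
# Check an RC's status.conditions list for an active ReplicaFailure
# condition, as the test does before and after scaling down.
def has_replica_failure(conditions):
    return any(
        c.get("type") == "ReplicaFailure" and c.get("status") == "True"
        for c in conditions
    )

over_quota = [{"type": "ReplicaFailure", "status": "True",
               "reason": "FailedCreate"}]
print(has_replica_failure(over_quota))  # True: quota exceeded
print(has_replica_failure([]))          # False: condition cleared
```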
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:31:10.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap that has name configmap-test-emptyKey-0e68bb03-ba89-4838-a903-6873b024cc4d
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:31:10.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3565" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":275,"completed":244,"skipped":4223,"failed":0}
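The empty-key test above passes because the apiserver rejects the ConfigMap at validation time. An approximate sketch of that key validation (the character class and 253-character limit are hedged from upstream validation rules, not shown in this log):

```python
import re

# ConfigMap data keys must be non-empty, at most 253 characters,
# and limited to alphanumerics, '-', '_' and '.'.
KEY_RE = re.compile(r'^[-._a-zA-Z0-9]+$')

def is_valid_configmap_key(key: str) -> bool:
    return len(key) <= 253 and bool(KEY_RE.match(key))

print(is_valid_configmap_key(""))             # False: empty key rejected
print(is_valid_configmap_key("config.json"))  # True: ordinary key accepted
```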
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:31:10.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: set up a multi version CRD
Aug 13 19:31:11.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:31:26.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8617" for this suite.

• [SLOW TEST:15.550 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":275,"completed":245,"skipped":4237,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:31:26.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 19:31:26.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588" in namespace "projected-633" to be "Succeeded or Failed"
Aug 13 19:31:26.399: INFO: Pod "downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588": Phase="Pending", Reason="", readiness=false. Elapsed: 24.787643ms
Aug 13 19:31:28.416: INFO: Pod "downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041763385s
Aug 13 19:31:30.420: INFO: Pod "downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046072547s
Aug 13 19:31:32.443: INFO: Pod "downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068840504s
Aug 13 19:31:34.606: INFO: Pod "downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588": Phase="Running", Reason="", readiness=true. Elapsed: 8.232410648s
Aug 13 19:31:36.670: INFO: Pod "downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.296148917s
STEP: Saw pod success
Aug 13 19:31:36.670: INFO: Pod "downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588" satisfied condition "Succeeded or Failed"
Aug 13 19:31:36.672: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588 container client-container: 
STEP: delete the pod
Aug 13 19:31:36.853: INFO: Waiting for pod downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588 to disappear
Aug 13 19:31:36.870: INFO: Pod downwardapi-volume-27d09a09-7cca-4a1a-bd7a-a2f42d53f588 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:31:36.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-633" for this suite.

• [SLOW TEST:10.614 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":275,"completed":246,"skipped":4257,"failed":0}
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:31:36.879: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward api env vars
Aug 13 19:31:36.997: INFO: Waiting up to 5m0s for pod "downward-api-d5f82521-f3fb-406c-a3ad-880c91828f25" in namespace "downward-api-9255" to be "Succeeded or Failed"
Aug 13 19:31:37.019: INFO: Pod "downward-api-d5f82521-f3fb-406c-a3ad-880c91828f25": Phase="Pending", Reason="", readiness=false. Elapsed: 22.121044ms
Aug 13 19:31:39.445: INFO: Pod "downward-api-d5f82521-f3fb-406c-a3ad-880c91828f25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.448187568s
Aug 13 19:31:41.448: INFO: Pod "downward-api-d5f82521-f3fb-406c-a3ad-880c91828f25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.451178382s
STEP: Saw pod success
Aug 13 19:31:41.448: INFO: Pod "downward-api-d5f82521-f3fb-406c-a3ad-880c91828f25" satisfied condition "Succeeded or Failed"
Aug 13 19:31:41.451: INFO: Trying to get logs from node kali-worker pod downward-api-d5f82521-f3fb-406c-a3ad-880c91828f25 container dapi-container: 
STEP: delete the pod
Aug 13 19:31:41.563: INFO: Waiting for pod downward-api-d5f82521-f3fb-406c-a3ad-880c91828f25 to disappear
Aug 13 19:31:41.568: INFO: Pod downward-api-d5f82521-f3fb-406c-a3ad-880c91828f25 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:31:41.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9255" for this suite.
•{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":275,"completed":247,"skipped":4263,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:31:41.574: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:31:41.685: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:31:50.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8897" for this suite.

• [SLOW TEST:9.136 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:48
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":275,"completed":248,"skipped":4267,"failed":0}
SSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:31:50.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Performing setup for networking test in namespace pod-network-test-8167
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 13 19:31:51.406: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Aug 13 19:31:51.692: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 19:31:53.720: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 19:31:55.744: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Aug 13 19:31:57.701: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:31:59.695: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:32:01.869: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:32:03.720: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:32:05.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:32:07.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:32:09.700: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:32:11.696: INFO: The status of Pod netserver-0 is Running (Ready = false)
Aug 13 19:32:13.697: INFO: The status of Pod netserver-0 is Running (Ready = true)
Aug 13 19:32:13.702: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Aug 13 19:32:19.726: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.32:8080/dial?request=hostname&protocol=http&host=10.244.2.31&port=8080&tries=1'] Namespace:pod-network-test-8167 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:32:19.726: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:32:19.763082       7 log.go:172] (0xc005553c30) (0xc0014d5360) Create stream
I0813 19:32:19.763118       7 log.go:172] (0xc005553c30) (0xc0014d5360) Stream added, broadcasting: 1
I0813 19:32:19.767552       7 log.go:172] (0xc005553c30) Reply frame received for 1
I0813 19:32:19.767576       7 log.go:172] (0xc005553c30) (0xc0014d5540) Create stream
I0813 19:32:19.767585       7 log.go:172] (0xc005553c30) (0xc0014d5540) Stream added, broadcasting: 3
I0813 19:32:19.768557       7 log.go:172] (0xc005553c30) Reply frame received for 3
I0813 19:32:19.768585       7 log.go:172] (0xc005553c30) (0xc00126c820) Create stream
I0813 19:32:19.768596       7 log.go:172] (0xc005553c30) (0xc00126c820) Stream added, broadcasting: 5
I0813 19:32:19.769680       7 log.go:172] (0xc005553c30) Reply frame received for 5
I0813 19:32:19.848879       7 log.go:172] (0xc005553c30) Data frame received for 3
I0813 19:32:19.848938       7 log.go:172] (0xc0014d5540) (3) Data frame handling
I0813 19:32:19.848981       7 log.go:172] (0xc0014d5540) (3) Data frame sent
I0813 19:32:19.849166       7 log.go:172] (0xc005553c30) Data frame received for 3
I0813 19:32:19.849193       7 log.go:172] (0xc0014d5540) (3) Data frame handling
I0813 19:32:19.849283       7 log.go:172] (0xc005553c30) Data frame received for 5
I0813 19:32:19.849342       7 log.go:172] (0xc00126c820) (5) Data frame handling
I0813 19:32:19.851156       7 log.go:172] (0xc005553c30) Data frame received for 1
I0813 19:32:19.851178       7 log.go:172] (0xc0014d5360) (1) Data frame handling
I0813 19:32:19.851215       7 log.go:172] (0xc0014d5360) (1) Data frame sent
I0813 19:32:19.851370       7 log.go:172] (0xc005553c30) (0xc0014d5360) Stream removed, broadcasting: 1
I0813 19:32:19.851508       7 log.go:172] (0xc005553c30) (0xc0014d5360) Stream removed, broadcasting: 1
I0813 19:32:19.851531       7 log.go:172] (0xc005553c30) (0xc0014d5540) Stream removed, broadcasting: 3
I0813 19:32:19.851554       7 log.go:172] (0xc005553c30) Go away received
I0813 19:32:19.851612       7 log.go:172] (0xc005553c30) (0xc00126c820) Stream removed, broadcasting: 5
Aug 13 19:32:19.851: INFO: Waiting for responses: map[]
Aug 13 19:32:19.855: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.32:8080/dial?request=hostname&protocol=http&host=10.244.1.126&port=8080&tries=1'] Namespace:pod-network-test-8167 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 13 19:32:19.855: INFO: >>> kubeConfig: /root/.kube/config
I0813 19:32:19.889603       7 log.go:172] (0xc005750420) (0xc000b50f00) Create stream
I0813 19:32:19.889634       7 log.go:172] (0xc005750420) (0xc000b50f00) Stream added, broadcasting: 1
I0813 19:32:19.891395       7 log.go:172] (0xc005750420) Reply frame received for 1
I0813 19:32:19.891430       7 log.go:172] (0xc005750420) (0xc00126cb40) Create stream
I0813 19:32:19.891447       7 log.go:172] (0xc005750420) (0xc00126cb40) Stream added, broadcasting: 3
I0813 19:32:19.892347       7 log.go:172] (0xc005750420) Reply frame received for 3
I0813 19:32:19.892399       7 log.go:172] (0xc005750420) (0xc000b51b80) Create stream
I0813 19:32:19.892419       7 log.go:172] (0xc005750420) (0xc000b51b80) Stream added, broadcasting: 5
I0813 19:32:19.893532       7 log.go:172] (0xc005750420) Reply frame received for 5
I0813 19:32:19.965449       7 log.go:172] (0xc005750420) Data frame received for 3
I0813 19:32:19.965488       7 log.go:172] (0xc00126cb40) (3) Data frame handling
I0813 19:32:19.965509       7 log.go:172] (0xc00126cb40) (3) Data frame sent
I0813 19:32:19.966237       7 log.go:172] (0xc005750420) Data frame received for 3
I0813 19:32:19.966265       7 log.go:172] (0xc00126cb40) (3) Data frame handling
I0813 19:32:19.966366       7 log.go:172] (0xc005750420) Data frame received for 5
I0813 19:32:19.966396       7 log.go:172] (0xc000b51b80) (5) Data frame handling
I0813 19:32:19.967860       7 log.go:172] (0xc005750420) Data frame received for 1
I0813 19:32:19.967881       7 log.go:172] (0xc000b50f00) (1) Data frame handling
I0813 19:32:19.967895       7 log.go:172] (0xc000b50f00) (1) Data frame sent
I0813 19:32:19.967907       7 log.go:172] (0xc005750420) (0xc000b50f00) Stream removed, broadcasting: 1
I0813 19:32:19.967918       7 log.go:172] (0xc005750420) Go away received
I0813 19:32:19.968108       7 log.go:172] (0xc005750420) (0xc000b50f00) Stream removed, broadcasting: 1
I0813 19:32:19.968141       7 log.go:172] (0xc005750420) (0xc00126cb40) Stream removed, broadcasting: 3
I0813 19:32:19.968161       7 log.go:172] (0xc005750420) (0xc000b51b80) Stream removed, broadcasting: 5
Aug 13 19:32:19.968: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:32:19.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8167" for this suite.

• [SLOW TEST:29.264 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":275,"completed":249,"skipped":4273,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:32:19.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:32:20.680: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:32:22.689: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943940, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943940, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943940, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943940, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:32:24.694: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943940, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943940, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943940, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943940, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:32:28.103: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:32:28.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3747" for this suite.
STEP: Destroying namespace "webhook-3747-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:8.975 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":275,"completed":250,"skipped":4273,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:32:28.951: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-492e64cd-5159-4378-ba75-f9c4d00098c9
STEP: Creating a pod to test consume configMaps
Aug 13 19:32:30.174: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e610cc27-8237-4afa-ba95-848202d86889" in namespace "projected-6790" to be "Succeeded or Failed"
Aug 13 19:32:30.231: INFO: Pod "pod-projected-configmaps-e610cc27-8237-4afa-ba95-848202d86889": Phase="Pending", Reason="", readiness=false. Elapsed: 57.098578ms
Aug 13 19:32:32.282: INFO: Pod "pod-projected-configmaps-e610cc27-8237-4afa-ba95-848202d86889": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107594996s
Aug 13 19:32:34.311: INFO: Pod "pod-projected-configmaps-e610cc27-8237-4afa-ba95-848202d86889": Phase="Pending", Reason="", readiness=false. Elapsed: 4.136390839s
Aug 13 19:32:36.315: INFO: Pod "pod-projected-configmaps-e610cc27-8237-4afa-ba95-848202d86889": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.140282174s
STEP: Saw pod success
Aug 13 19:32:36.315: INFO: Pod "pod-projected-configmaps-e610cc27-8237-4afa-ba95-848202d86889" satisfied condition "Succeeded or Failed"
Aug 13 19:32:36.317: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-e610cc27-8237-4afa-ba95-848202d86889 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 13 19:32:36.451: INFO: Waiting for pod pod-projected-configmaps-e610cc27-8237-4afa-ba95-848202d86889 to disappear
Aug 13 19:32:36.520: INFO: Pod pod-projected-configmaps-e610cc27-8237-4afa-ba95-848202d86889 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:32:36.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6790" for this suite.

• [SLOW TEST:7.588 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:36
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":275,"completed":251,"skipped":4285,"failed":0}
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:32:36.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:42
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test downward API volume plugin
Aug 13 19:32:36.793: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb185052-7b35-4065-90c5-0d2024f0e041" in namespace "downward-api-1361" to be "Succeeded or Failed"
Aug 13 19:32:36.810: INFO: Pod "downwardapi-volume-cb185052-7b35-4065-90c5-0d2024f0e041": Phase="Pending", Reason="", readiness=false. Elapsed: 16.938671ms
Aug 13 19:32:38.853: INFO: Pod "downwardapi-volume-cb185052-7b35-4065-90c5-0d2024f0e041": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060346078s
Aug 13 19:32:40.857: INFO: Pod "downwardapi-volume-cb185052-7b35-4065-90c5-0d2024f0e041": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064341961s
Aug 13 19:32:42.861: INFO: Pod "downwardapi-volume-cb185052-7b35-4065-90c5-0d2024f0e041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.068737024s
STEP: Saw pod success
Aug 13 19:32:42.861: INFO: Pod "downwardapi-volume-cb185052-7b35-4065-90c5-0d2024f0e041" satisfied condition "Succeeded or Failed"
Aug 13 19:32:42.864: INFO: Trying to get logs from node kali-worker pod downwardapi-volume-cb185052-7b35-4065-90c5-0d2024f0e041 container client-container: 
STEP: delete the pod
Aug 13 19:32:42.886: INFO: Waiting for pod downwardapi-volume-cb185052-7b35-4065-90c5-0d2024f0e041 to disappear
Aug 13 19:32:43.168: INFO: Pod downwardapi-volume-cb185052-7b35-4065-90c5-0d2024f0e041 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:32:43.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1361" for this suite.

• [SLOW TEST:6.724 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:37
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":275,"completed":252,"skipped":4286,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:32:43.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Starting the proxy
Aug 13 19:32:43.388: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix994217605/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:32:43.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6485" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":275,"completed":253,"skipped":4301,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:32:43.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 13 19:32:43.866: INFO: Waiting up to 5m0s for pod "pod-d67c0b58-2fe4-40a0-a257-405768d458f8" in namespace "emptydir-12" to be "Succeeded or Failed"
Aug 13 19:32:43.938: INFO: Pod "pod-d67c0b58-2fe4-40a0-a257-405768d458f8": Phase="Pending", Reason="", readiness=false. Elapsed: 71.989222ms
Aug 13 19:32:45.942: INFO: Pod "pod-d67c0b58-2fe4-40a0-a257-405768d458f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076489798s
Aug 13 19:32:47.946: INFO: Pod "pod-d67c0b58-2fe4-40a0-a257-405768d458f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.08048191s
STEP: Saw pod success
Aug 13 19:32:47.946: INFO: Pod "pod-d67c0b58-2fe4-40a0-a257-405768d458f8" satisfied condition "Succeeded or Failed"
Aug 13 19:32:47.949: INFO: Trying to get logs from node kali-worker pod pod-d67c0b58-2fe4-40a0-a257-405768d458f8 container test-container: 
STEP: delete the pod
Aug 13 19:32:48.015: INFO: Waiting for pod pod-d67c0b58-2fe4-40a0-a257-405768d458f8 to disappear
Aug 13 19:32:48.041: INFO: Pod pod-d67c0b58-2fe4-40a0-a257-405768d458f8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:32:48.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-12" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":254,"skipped":4312,"failed":0}
SS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:32:48.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:32:48.146: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-88434152-465a-44f7-bb64-64facd6cff2a" in namespace "security-context-test-5529" to be "Succeeded or Failed"
Aug 13 19:32:48.163: INFO: Pod "busybox-readonly-false-88434152-465a-44f7-bb64-64facd6cff2a": Phase="Pending", Reason="", readiness=false. Elapsed: 16.891689ms
Aug 13 19:32:50.211: INFO: Pod "busybox-readonly-false-88434152-465a-44f7-bb64-64facd6cff2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065338535s
Aug 13 19:32:52.215: INFO: Pod "busybox-readonly-false-88434152-465a-44f7-bb64-64facd6cff2a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069462378s
Aug 13 19:32:54.219: INFO: Pod "busybox-readonly-false-88434152-465a-44f7-bb64-64facd6cff2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.072998986s
Aug 13 19:32:54.219: INFO: Pod "busybox-readonly-false-88434152-465a-44f7-bb64-64facd6cff2a" satisfied condition "Succeeded or Failed"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:32:54.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5529" for this suite.

• [SLOW TEST:6.175 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":275,"completed":255,"skipped":4314,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:32:54.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:32:55.825: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:32:58.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943975, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943975, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943975, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943975, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:33:01.456: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:33:01.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9416" for this suite.
STEP: Destroying namespace "webhook-9416-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:7.418 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":275,"completed":256,"skipped":4368,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:33:01.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 13 19:33:01.740: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the sample API server.
Aug 13 19:33:02.939: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 13 19:33:05.600: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943982, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943982, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943983, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943982, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:33:07.775: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943982, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943982, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943983, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943982, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:33:09.954: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943982, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943982, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943983, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732943982, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7996d54f97\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:33:12.742: INFO: Waited 1.133203433s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:33:13.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-1703" for this suite.

• [SLOW TEST:11.871 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":275,"completed":257,"skipped":4383,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:33:13.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod busybox-010b0d67-6a37-45e8-ae29-505440c1db86 in namespace container-probe-2342
Aug 13 19:33:17.894: INFO: Started pod busybox-010b0d67-6a37-45e8-ae29-505440c1db86 in namespace container-probe-2342
STEP: checking the pod's current state and verifying that restartCount is present
Aug 13 19:33:17.897: INFO: Initial restart count of pod busybox-010b0d67-6a37-45e8-ae29-505440c1db86 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:37:18.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2342" for this suite.

• [SLOW TEST:244.913 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":275,"completed":258,"skipped":4390,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:37:18.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:38
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:82
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:37:18.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7407" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":275,"completed":259,"skipped":4427,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:37:18.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:38
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating pod pod-subpath-test-configmap-rjs5
STEP: Creating a pod to test atomic-volume-subpath
Aug 13 19:37:19.866: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-rjs5" in namespace "subpath-8796" to be "Succeeded or Failed"
Aug 13 19:37:19.932: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Pending", Reason="", readiness=false. Elapsed: 66.109948ms
Aug 13 19:37:21.949: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083234672s
Aug 13 19:37:23.993: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126952399s
Aug 13 19:37:25.997: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 6.131605954s
Aug 13 19:37:28.040: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 8.173732995s
Aug 13 19:37:30.043: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 10.177591353s
Aug 13 19:37:32.065: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 12.198742101s
Aug 13 19:37:34.069: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 14.203159804s
Aug 13 19:37:36.073: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 16.207668113s
Aug 13 19:37:38.077: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 18.211695705s
Aug 13 19:37:40.160: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 20.294284185s
Aug 13 19:37:42.687: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 22.821045843s
Aug 13 19:37:44.842: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Running", Reason="", readiness=true. Elapsed: 24.976163896s
Aug 13 19:37:46.847: INFO: Pod "pod-subpath-test-configmap-rjs5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.981076474s
STEP: Saw pod success
Aug 13 19:37:46.847: INFO: Pod "pod-subpath-test-configmap-rjs5" satisfied condition "Succeeded or Failed"
Aug 13 19:37:46.851: INFO: Trying to get logs from node kali-worker pod pod-subpath-test-configmap-rjs5 container test-container-subpath-configmap-rjs5: 
STEP: delete the pod
Aug 13 19:37:46.884: INFO: Waiting for pod pod-subpath-test-configmap-rjs5 to disappear
Aug 13 19:37:46.929: INFO: Pod pod-subpath-test-configmap-rjs5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-rjs5
Aug 13 19:37:46.929: INFO: Deleting pod "pod-subpath-test-configmap-rjs5" in namespace "subpath-8796"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:37:46.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8796" for this suite.

• [SLOW TEST:28.085 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:34
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":275,"completed":260,"skipped":4439,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:37:46.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name secret-emptykey-test-2dafcb28-7320-4749-a7f6-333a79ac5ad4
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:37:47.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8291" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":275,"completed":261,"skipped":4444,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:37:47.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 13 19:37:47.149: INFO: Waiting up to 5m0s for pod "pod-19970cd8-b7ce-48a3-8d9f-de5c56651a55" in namespace "emptydir-7956" to be "Succeeded or Failed"
Aug 13 19:37:47.154: INFO: Pod "pod-19970cd8-b7ce-48a3-8d9f-de5c56651a55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.731148ms
Aug 13 19:37:49.158: INFO: Pod "pod-19970cd8-b7ce-48a3-8d9f-de5c56651a55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008992922s
Aug 13 19:37:51.162: INFO: Pod "pod-19970cd8-b7ce-48a3-8d9f-de5c56651a55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013319502s
STEP: Saw pod success
Aug 13 19:37:51.162: INFO: Pod "pod-19970cd8-b7ce-48a3-8d9f-de5c56651a55" satisfied condition "Succeeded or Failed"
Aug 13 19:37:51.166: INFO: Trying to get logs from node kali-worker pod pod-19970cd8-b7ce-48a3-8d9f-de5c56651a55 container test-container: 
STEP: delete the pod
Aug 13 19:37:51.354: INFO: Waiting for pod pod-19970cd8-b7ce-48a3-8d9f-de5c56651a55 to disappear
Aug 13 19:37:51.376: INFO: Pod pod-19970cd8-b7ce-48a3-8d9f-de5c56651a55 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:37:51.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7956" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":262,"skipped":4471,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:37:51.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Aug 13 19:37:52.485: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Aug 13 19:37:55.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:37:57.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 13 19:37:59.022: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63732944272, loc:(*time.Location)(0x7b220e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-779fdc84d9\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Aug 13 19:38:02.071: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Aug 13 19:38:02.125: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:38:02.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9256" for this suite.
STEP: Destroying namespace "webhook-9256-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102

• [SLOW TEST:10.869 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":275,"completed":263,"skipped":4516,"failed":0}
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:38:02.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name configmap-test-volume-map-71338275-20c5-4245-a83f-22b0a9aa4d99
STEP: Creating a pod to test consume configMaps
Aug 13 19:38:02.374: INFO: Waiting up to 5m0s for pod "pod-configmaps-03c72380-3457-471c-90fb-adccfd7b7801" in namespace "configmap-5618" to be "Succeeded or Failed"
Aug 13 19:38:02.396: INFO: Pod "pod-configmaps-03c72380-3457-471c-90fb-adccfd7b7801": Phase="Pending", Reason="", readiness=false. Elapsed: 21.740297ms
Aug 13 19:38:04.441: INFO: Pod "pod-configmaps-03c72380-3457-471c-90fb-adccfd7b7801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067339716s
Aug 13 19:38:06.446: INFO: Pod "pod-configmaps-03c72380-3457-471c-90fb-adccfd7b7801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071885359s
STEP: Saw pod success
Aug 13 19:38:06.446: INFO: Pod "pod-configmaps-03c72380-3457-471c-90fb-adccfd7b7801" satisfied condition "Succeeded or Failed"
Aug 13 19:38:06.450: INFO: Trying to get logs from node kali-worker pod pod-configmaps-03c72380-3457-471c-90fb-adccfd7b7801 container configmap-volume-test: 
STEP: delete the pod
Aug 13 19:38:06.647: INFO: Waiting for pod pod-configmaps-03c72380-3457-471c-90fb-adccfd7b7801 to disappear
Aug 13 19:38:06.658: INFO: Pod pod-configmaps-03c72380-3457-471c-90fb-adccfd7b7801 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:38:06.658: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5618" for this suite.
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":264,"skipped":4518,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:38:06.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: starting the proxy server
Aug 13 19:38:06.742: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:38:06.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9551" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":275,"completed":265,"skipped":4566,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:38:06.837: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 13 19:38:06.894: INFO: Waiting up to 5m0s for pod "pod-5b32721f-3379-4401-8d40-5cbfda127f87" in namespace "emptydir-3047" to be "Succeeded or Failed"
Aug 13 19:38:06.980: INFO: Pod "pod-5b32721f-3379-4401-8d40-5cbfda127f87": Phase="Pending", Reason="", readiness=false. Elapsed: 86.041597ms
Aug 13 19:38:09.183: INFO: Pod "pod-5b32721f-3379-4401-8d40-5cbfda127f87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.289435865s
Aug 13 19:38:11.292: INFO: Pod "pod-5b32721f-3379-4401-8d40-5cbfda127f87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397885491s
Aug 13 19:38:13.417: INFO: Pod "pod-5b32721f-3379-4401-8d40-5cbfda127f87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.523455682s
STEP: Saw pod success
Aug 13 19:38:13.417: INFO: Pod "pod-5b32721f-3379-4401-8d40-5cbfda127f87" satisfied condition "Succeeded or Failed"
Aug 13 19:38:13.420: INFO: Trying to get logs from node kali-worker pod pod-5b32721f-3379-4401-8d40-5cbfda127f87 container test-container: 
STEP: delete the pod
Aug 13 19:38:13.695: INFO: Waiting for pod pod-5b32721f-3379-4401-8d40-5cbfda127f87 to disappear
Aug 13 19:38:13.742: INFO: Pod pod-5b32721f-3379-4401-8d40-5cbfda127f87 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:38:13.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3047" for this suite.

• [SLOW TEST:6.983 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":266,"skipped":4587,"failed":0}
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:38:13.821: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 13 19:38:13.948: INFO: Waiting up to 5m0s for pod "pod-7802ff66-16ee-4910-ba2d-58e4a0a39ea7" in namespace "emptydir-6582" to be "Succeeded or Failed"
Aug 13 19:38:13.951: INFO: Pod "pod-7802ff66-16ee-4910-ba2d-58e4a0a39ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.971223ms
Aug 13 19:38:15.956: INFO: Pod "pod-7802ff66-16ee-4910-ba2d-58e4a0a39ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008277793s
Aug 13 19:38:17.960: INFO: Pod "pod-7802ff66-16ee-4910-ba2d-58e4a0a39ea7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.012436601s
Aug 13 19:38:19.965: INFO: Pod "pod-7802ff66-16ee-4910-ba2d-58e4a0a39ea7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016852223s
STEP: Saw pod success
Aug 13 19:38:19.965: INFO: Pod "pod-7802ff66-16ee-4910-ba2d-58e4a0a39ea7" satisfied condition "Succeeded or Failed"
Aug 13 19:38:19.967: INFO: Trying to get logs from node kali-worker pod pod-7802ff66-16ee-4910-ba2d-58e4a0a39ea7 container test-container: 
STEP: delete the pod
Aug 13 19:38:20.045: INFO: Waiting for pod pod-7802ff66-16ee-4910-ba2d-58e4a0a39ea7 to disappear
Aug 13 19:38:20.190: INFO: Pod pod-7802ff66-16ee-4910-ba2d-58e4a0a39ea7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:38:20.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6582" for this suite.

• [SLOW TEST:6.408 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":267,"skipped":4592,"failed":0}
SSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:38:20.229: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:38:20.440: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-9941eba5-1760-4970-a7b6-f14666536d7d" in namespace "security-context-test-6630" to be "Succeeded or Failed"
Aug 13 19:38:20.443: INFO: Pod "busybox-privileged-false-9941eba5-1760-4970-a7b6-f14666536d7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.451453ms
Aug 13 19:38:22.897: INFO: Pod "busybox-privileged-false-9941eba5-1760-4970-a7b6-f14666536d7d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.456217082s
Aug 13 19:38:24.900: INFO: Pod "busybox-privileged-false-9941eba5-1760-4970-a7b6-f14666536d7d": Phase="Running", Reason="", readiness=true. Elapsed: 4.459666908s
Aug 13 19:38:26.904: INFO: Pod "busybox-privileged-false-9941eba5-1760-4970-a7b6-f14666536d7d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.463718858s
Aug 13 19:38:26.904: INFO: Pod "busybox-privileged-false-9941eba5-1760-4970-a7b6-f14666536d7d" satisfied condition "Succeeded or Failed"
Aug 13 19:38:26.911: INFO: Got logs for pod "busybox-privileged-false-9941eba5-1760-4970-a7b6-f14666536d7d": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:38:26.911: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6630" for this suite.

• [SLOW TEST:6.688 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":268,"skipped":4596,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:38:26.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:54
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
Aug 13 19:38:27.042: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Pending, waiting for it to be Running (with Ready = true)
Aug 13 19:38:29.046: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Pending, waiting for it to be Running (with Ready = true)
Aug 13 19:38:31.112: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Pending, waiting for it to be Running (with Ready = true)
Aug 13 19:38:33.046: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:35.064: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:37.046: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:39.046: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:41.046: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:43.047: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:45.047: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:47.047: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:49.045: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:51.047: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:53.054: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = false)
Aug 13 19:38:55.046: INFO: The status of Pod test-webserver-94703b6c-868f-4e23-a5c0-f5be8637923b is Running (Ready = true)
Aug 13 19:38:55.049: INFO: Container started at 2020-08-13 19:38:30 +0000 UTC, pod became ready at 2020-08-13 19:38:53 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:38:55.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6051" for this suite.

• [SLOW TEST:28.139 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":275,"completed":269,"skipped":4612,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:38:55.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:219
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: validating cluster-info
Aug 13 19:38:55.421: INFO: Running '/usr/local/bin/kubectl --server=https://172.30.12.66:35995 --kubeconfig=/root/.kube/config cluster-info'
Aug 13 19:38:59.573: INFO: stderr: ""
Aug 13 19:38:59.573: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35995/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:38:59.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8983" for this suite.
•
{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":275,"completed":270,"skipped":4634,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:38:59.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 13 19:38:59.692: INFO: Waiting up to 5m0s for pod "pod-2bfeb81b-959a-4053-af75-dc8f36245df8" in namespace "emptydir-4957" to be "Succeeded or Failed"
Aug 13 19:38:59.800: INFO: Pod "pod-2bfeb81b-959a-4053-af75-dc8f36245df8": Phase="Pending", Reason="", readiness=false. Elapsed: 108.282904ms
Aug 13 19:39:02.290: INFO: Pod "pod-2bfeb81b-959a-4053-af75-dc8f36245df8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.598592744s
Aug 13 19:39:04.526: INFO: Pod "pod-2bfeb81b-959a-4053-af75-dc8f36245df8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.834385447s
Aug 13 19:39:06.530: INFO: Pod "pod-2bfeb81b-959a-4053-af75-dc8f36245df8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.83784922s
STEP: Saw pod success
Aug 13 19:39:06.530: INFO: Pod "pod-2bfeb81b-959a-4053-af75-dc8f36245df8" satisfied condition "Succeeded or Failed"
Aug 13 19:39:06.533: INFO: Trying to get logs from node kali-worker pod pod-2bfeb81b-959a-4053-af75-dc8f36245df8 container test-container: 
STEP: delete the pod
Aug 13 19:39:06.761: INFO: Waiting for pod pod-2bfeb81b-959a-4053-af75-dc8f36245df8 to disappear
Aug 13 19:39:06.773: INFO: Pod pod-2bfeb81b-959a-4053-af75-dc8f36245df8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:39:06.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4957" for this suite.

• [SLOW TEST:7.179 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:42
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":271,"skipped":4658,"failed":0}
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:39:06.781: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating configMap with name projected-configmap-test-volume-432dd5e1-e510-4a1b-94e8-aabb7d63cbc3
STEP: Creating a pod to test consume configMaps
Aug 13 19:39:06.991: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c4c663a4-364c-4faa-90a0-cc66ed6eef13" in namespace "projected-9220" to be "Succeeded or Failed"
Aug 13 19:39:07.013: INFO: Pod "pod-projected-configmaps-c4c663a4-364c-4faa-90a0-cc66ed6eef13": Phase="Pending", Reason="", readiness=false. Elapsed: 21.902231ms
Aug 13 19:39:09.017: INFO: Pod "pod-projected-configmaps-c4c663a4-364c-4faa-90a0-cc66ed6eef13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025951045s
Aug 13 19:39:11.021: INFO: Pod "pod-projected-configmaps-c4c663a4-364c-4faa-90a0-cc66ed6eef13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030391864s
STEP: Saw pod success
Aug 13 19:39:11.021: INFO: Pod "pod-projected-configmaps-c4c663a4-364c-4faa-90a0-cc66ed6eef13" satisfied condition "Succeeded or Failed"
Aug 13 19:39:11.024: INFO: Trying to get logs from node kali-worker pod pod-projected-configmaps-c4c663a4-364c-4faa-90a0-cc66ed6eef13 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 13 19:39:11.114: INFO: Waiting for pod pod-projected-configmaps-c4c663a4-364c-4faa-90a0-cc66ed6eef13 to disappear
Aug 13 19:39:11.127: INFO: Pod pod-projected-configmaps-c4c663a4-364c-4faa-90a0-cc66ed6eef13 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:39:11.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9220" for this suite.
•
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":272,"skipped":4658,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:39:11.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating projection with secret that has name projected-secret-test-map-b24f3ec8-c203-477b-aef9-d4bf50596168
STEP: Creating a pod to test consume secrets
Aug 13 19:39:11.289: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ab91ccb3-c544-4a21-94f1-fd21664366bd" in namespace "projected-1504" to be "Succeeded or Failed"
Aug 13 19:39:11.331: INFO: Pod "pod-projected-secrets-ab91ccb3-c544-4a21-94f1-fd21664366bd": Phase="Pending", Reason="", readiness=false. Elapsed: 41.858211ms
Aug 13 19:39:13.335: INFO: Pod "pod-projected-secrets-ab91ccb3-c544-4a21-94f1-fd21664366bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045424821s
Aug 13 19:39:15.340: INFO: Pod "pod-projected-secrets-ab91ccb3-c544-4a21-94f1-fd21664366bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051299847s
STEP: Saw pod success
Aug 13 19:39:15.341: INFO: Pod "pod-projected-secrets-ab91ccb3-c544-4a21-94f1-fd21664366bd" satisfied condition "Succeeded or Failed"
Aug 13 19:39:15.343: INFO: Trying to get logs from node kali-worker pod pod-projected-secrets-ab91ccb3-c544-4a21-94f1-fd21664366bd container projected-secret-volume-test: 
STEP: delete the pod
Aug 13 19:39:15.376: INFO: Waiting for pod pod-projected-secrets-ab91ccb3-c544-4a21-94f1-fd21664366bd to disappear
Aug 13 19:39:15.460: INFO: Pod pod-projected-secrets-ab91ccb3-c544-4a21-94f1-fd21664366bd no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:39:15.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1504" for this suite.
•
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":275,"completed":273,"skipped":4676,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:39:15.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:42
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: Creating the pod
Aug 13 19:39:23.190: INFO: Successfully updated pod "labelsupdatec8e5dcb9-0d4b-436a-941c-759658cc8d11"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:39:26.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6505" for this suite.

• [SLOW TEST:11.294 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":275,"completed":274,"skipped":4688,"failed":0}
[sig-api-machinery] Secrets 
  should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:178
STEP: Creating a kubernetes client
Aug 13 19:39:26.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should patch a secret [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:703
STEP: creating a secret
STEP: listing secrets in all namespaces to ensure that there are more than zero
STEP: patching the secret
STEP: deleting the secret using a LabelSelector
STEP: listing secrets in all namespaces, searching for label name and value in patch
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:179
Aug 13 19:39:27.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9467" for this suite.
•
{"msg":"PASSED [sig-api-machinery] Secrets should patch a secret [Conformance]","total":275,"completed":275,"skipped":4688,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Aug 13 19:39:27.710: INFO: Running AfterSuite actions on all nodes
Aug 13 19:39:27.710: INFO: Running AfterSuite actions on node 1
Aug 13 19:39:27.710: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":275,"completed":275,"skipped":4717,"failed":0}

Ran 275 of 4992 Specs in 5341.168 seconds
SUCCESS! -- 275 Passed | 0 Failed | 0 Pending | 4717 Skipped
PASS